Mar 17 18:51:47.009462 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 17 18:51:47.009487 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025 Mar 17 18:51:47.009495 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Mar 17 18:51:47.009503 kernel: printk: bootconsole [pl11] enabled Mar 17 18:51:47.009508 kernel: efi: EFI v2.70 by EDK II Mar 17 18:51:47.009514 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98 Mar 17 18:51:47.009520 kernel: random: crng init done Mar 17 18:51:47.009526 kernel: ACPI: Early table checksum verification disabled Mar 17 18:51:47.009531 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Mar 17 18:51:47.009537 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:51:47.009542 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:51:47.009548 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Mar 17 18:51:47.009554 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:51:47.009560 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:51:47.009567 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:51:47.009572 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:51:47.009578 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:51:47.009585 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:51:47.009591 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Mar 17 18:51:47.009597 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:51:47.009602 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Mar 17 18:51:47.009608 kernel: NUMA: Failed to initialise from firmware Mar 17 18:51:47.009614 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Mar 17 18:51:47.009620 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff] Mar 17 18:51:47.009626 kernel: Zone ranges: Mar 17 18:51:47.009631 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Mar 17 18:51:47.009637 kernel: DMA32 empty Mar 17 18:51:47.009642 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Mar 17 18:51:47.009649 kernel: Movable zone start for each node Mar 17 18:51:47.009655 kernel: Early memory node ranges Mar 17 18:51:47.009660 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Mar 17 18:51:47.009666 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Mar 17 18:51:47.009672 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Mar 17 18:51:47.009678 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 17 18:51:47.009683 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 17 18:51:47.009689 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 17 18:51:47.009695 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 17 18:51:47.009700 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 17 
18:51:47.009706 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 17 18:51:47.009712 kernel: psci: probing for conduit method from ACPI. Mar 17 18:51:47.009721 kernel: psci: PSCIv1.1 detected in firmware. Mar 17 18:51:47.009727 kernel: psci: Using standard PSCI v0.2 function IDs Mar 17 18:51:47.009734 kernel: psci: MIGRATE_INFO_TYPE not supported. Mar 17 18:51:47.009740 kernel: psci: SMC Calling Convention v1.4 Mar 17 18:51:47.009746 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Mar 17 18:51:47.009753 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Mar 17 18:51:47.009759 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Mar 17 18:51:47.009765 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Mar 17 18:51:47.009771 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 17 18:51:47.009778 kernel: Detected PIPT I-cache on CPU0 Mar 17 18:51:47.009784 kernel: CPU features: detected: GIC system register CPU interface Mar 17 18:51:47.009790 kernel: CPU features: detected: Hardware dirty bit management Mar 17 18:51:47.009796 kernel: CPU features: detected: Spectre-BHB Mar 17 18:51:47.009802 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 17 18:51:47.009808 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 17 18:51:47.009814 kernel: CPU features: detected: ARM erratum 1418040 Mar 17 18:51:47.009821 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Mar 17 18:51:47.009827 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 17 18:51:47.009833 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Mar 17 18:51:47.009839 kernel: Policy zone: Normal Mar 17 18:51:47.009847 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d Mar 17 18:51:47.009854 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 18:51:47.009860 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 18:51:47.009866 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 18:51:47.009872 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 18:51:47.009878 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB) Mar 17 18:51:47.009885 kernel: Memory: 3986944K/4194160K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 207216K reserved, 0K cma-reserved) Mar 17 18:51:47.009892 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 17 18:51:47.009899 kernel: trace event string verifier disabled Mar 17 18:51:47.009905 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 18:51:47.009911 kernel: rcu: RCU event tracing is enabled. Mar 17 18:51:47.009918 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 17 18:51:47.009924 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 18:51:47.009930 kernel: Tracing variant of Tasks RCU enabled. Mar 17 18:51:47.009936 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 18:51:47.009942 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 17 18:51:47.009948 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 17 18:51:47.009954 kernel: GICv3: 960 SPIs implemented Mar 17 18:51:47.009961 kernel: GICv3: 0 Extended SPIs implemented Mar 17 18:51:47.009967 kernel: GICv3: Distributor has no Range Selector support Mar 17 18:51:47.009973 kernel: Root IRQ handler: gic_handle_irq Mar 17 18:51:47.009979 kernel: GICv3: 16 PPIs implemented Mar 17 18:51:47.009985 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 17 18:51:47.009991 kernel: ITS: No ITS available, not enabling LPIs Mar 17 18:51:47.009997 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 18:51:47.010003 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 17 18:51:47.010010 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 17 18:51:47.010016 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 17 18:51:47.010023 kernel: Console: colour dummy device 80x25 Mar 17 18:51:47.010030 kernel: printk: console [tty1] enabled Mar 17 18:51:47.010037 kernel: ACPI: Core revision 20210730 Mar 17 18:51:47.010043 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 17 18:51:47.010050 kernel: pid_max: default: 32768 minimum: 301 Mar 17 18:51:47.010056 kernel: LSM: Security Framework initializing Mar 17 18:51:47.010062 kernel: SELinux: Initializing. Mar 17 18:51:47.010069 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 18:51:47.010075 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 18:51:47.010081 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Mar 17 18:51:47.010089 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Mar 17 18:51:47.010095 kernel: rcu: Hierarchical SRCU implementation. Mar 17 18:51:47.010102 kernel: Remapping and enabling EFI services. Mar 17 18:51:47.010108 kernel: smp: Bringing up secondary CPUs ... Mar 17 18:51:47.010114 kernel: Detected PIPT I-cache on CPU1 Mar 17 18:51:47.010121 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 17 18:51:47.010127 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 18:51:47.010133 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 17 18:51:47.010139 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 18:51:47.010146 kernel: SMP: Total of 2 processors activated. 
Mar 17 18:51:47.010153 kernel: CPU features: detected: 32-bit EL0 Support Mar 17 18:51:47.010160 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 17 18:51:47.010166 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 17 18:51:47.010173 kernel: CPU features: detected: CRC32 instructions Mar 17 18:51:47.010179 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 17 18:51:47.010185 kernel: CPU features: detected: LSE atomic instructions Mar 17 18:51:47.010191 kernel: CPU features: detected: Privileged Access Never Mar 17 18:51:47.010198 kernel: CPU: All CPU(s) started at EL1 Mar 17 18:51:47.010204 kernel: alternatives: patching kernel code Mar 17 18:51:47.010211 kernel: devtmpfs: initialized Mar 17 18:51:47.010222 kernel: KASLR enabled Mar 17 18:51:47.010229 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 18:51:47.010237 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 17 18:51:47.010243 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 18:51:47.010250 kernel: SMBIOS 3.1.0 present. Mar 17 18:51:47.010257 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 17 18:51:47.010263 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 18:51:47.010270 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 17 18:51:47.010278 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 17 18:51:47.010285 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 17 18:51:47.010291 kernel: audit: initializing netlink subsys (disabled) Mar 17 18:51:47.010298 kernel: audit: type=2000 audit(0.086:1): state=initialized audit_enabled=0 res=1 Mar 17 18:51:47.010305 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 18:51:47.010312 kernel: cpuidle: using governor menu Mar 17 18:51:47.010318 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Mar 17 18:51:47.010326 kernel: ASID allocator initialised with 32768 entries Mar 17 18:51:47.010332 kernel: ACPI: bus type PCI registered Mar 17 18:51:47.010339 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 18:51:47.010346 kernel: Serial: AMBA PL011 UART driver Mar 17 18:51:47.010352 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 18:51:47.010359 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Mar 17 18:51:47.010365 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 18:51:47.010372 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Mar 17 18:51:47.010391 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:51:47.010403 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 18:51:47.010410 kernel: ACPI: Added _OSI(Module Device) Mar 17 18:51:47.010416 kernel: ACPI: Added _OSI(Processor Device) Mar 17 18:51:47.010423 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 18:51:47.010429 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 18:51:47.010436 kernel: ACPI: Added _OSI(Linux-Dell-Video) Mar 17 18:51:47.010443 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Mar 17 18:51:47.010449 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Mar 17 18:51:47.010456 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 18:51:47.010463 kernel: ACPI: Interpreter enabled Mar 17 18:51:47.010470 kernel: ACPI: Using GIC for interrupt routing Mar 17 18:51:47.010477 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 17 18:51:47.010483 kernel: printk: console [ttyAMA0] enabled Mar 17 18:51:47.010490 kernel: printk: bootconsole [pl11] disabled Mar 17 18:51:47.010497 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 17 18:51:47.010503 kernel: iommu: Default domain type: Translated Mar 17 18:51:47.010510 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 17 18:51:47.010516 kernel: vgaarb: loaded Mar 17 18:51:47.010523 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 17 18:51:47.010531 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 17 18:51:47.010538 kernel: PTP clock support registered Mar 17 18:51:47.010544 kernel: Registered efivars operations Mar 17 18:51:47.010551 kernel: No ACPI PMU IRQ for CPU0 Mar 17 18:51:47.010557 kernel: No ACPI PMU IRQ for CPU1 Mar 17 18:51:47.010564 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 17 18:51:47.010570 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 18:51:47.010577 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 18:51:47.010585 kernel: pnp: PnP ACPI init Mar 17 18:51:47.010592 kernel: pnp: PnP ACPI: found 0 devices Mar 17 18:51:47.010598 kernel: NET: Registered PF_INET protocol family Mar 17 18:51:47.010605 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 18:51:47.010612 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 18:51:47.010618 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 18:51:47.010625 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 18:51:47.010632 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Mar 17 18:51:47.010639 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 18:51:47.010647 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 18:51:47.010653 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 18:51:47.010660 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 18:51:47.010667 kernel: PCI: CLS 0 bytes, default 64 Mar 17 18:51:47.010673 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Mar 17 18:51:47.010680 kernel: kvm [1]: HYP mode not available Mar 17 18:51:47.010686 kernel: Initialise system trusted keyrings Mar 17 18:51:47.010693 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 18:51:47.010700 kernel: Key type asymmetric registered Mar 17 18:51:47.010708 kernel: Asymmetric key parser 'x509' registered Mar 17 18:51:47.010714 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Mar 17 18:51:47.010721 kernel: io scheduler mq-deadline registered Mar 17 18:51:47.010727 kernel: io scheduler kyber registered Mar 17 18:51:47.010734 kernel: io scheduler bfq registered Mar 17 18:51:47.010740 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 18:51:47.010747 kernel: thunder_xcv, ver 1.0 Mar 17 18:51:47.010753 kernel: thunder_bgx, ver 1.0 Mar 17 18:51:47.010760 kernel: nicpf, ver 1.0 Mar 17 18:51:47.010766 kernel: nicvf, ver 1.0 Mar 17 18:51:47.010889 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 18:51:47.010950 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:51:46 UTC (1742237506) Mar 17 18:51:47.010960 kernel: efifb: probing for efifb Mar 17 18:51:47.010967 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 17 18:51:47.010973 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 17 18:51:47.010980 kernel: efifb: scrolling: redraw Mar 17 18:51:47.010987 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 17 18:51:47.010996 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 18:51:47.011002 kernel: fb0: EFI VGA frame buffer device Mar 17 18:51:47.011009 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Mar 17 18:51:47.011016 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 18:51:47.011022 kernel: NET: Registered PF_INET6 protocol family Mar 17 18:51:47.011029 kernel: Segment Routing with IPv6 Mar 17 18:51:47.011036 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 18:51:47.011042 kernel: NET: Registered PF_PACKET protocol family Mar 17 18:51:47.011049 kernel: Key type dns_resolver registered Mar 17 18:51:47.011055 kernel: registered taskstats version 1 Mar 17 18:51:47.011063 kernel: Loading compiled-in X.509 certificates Mar 17 18:51:47.011070 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4' Mar 17 18:51:47.011077 kernel: Key type .fscrypt registered Mar 17 18:51:47.011084 kernel: Key type fscrypt-provisioning registered Mar 17 18:51:47.011091 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 18:51:47.011106 kernel: ima: Allocated hash algorithm: sha1 Mar 17 18:51:47.011113 kernel: ima: No architecture policies found Mar 17 18:51:47.011120 kernel: clk: Disabling unused clocks Mar 17 18:51:47.011128 kernel: Freeing unused kernel memory: 36416K Mar 17 18:51:47.011135 kernel: Run /init as init process Mar 17 18:51:47.011141 kernel: with arguments: Mar 17 18:51:47.011148 kernel: /init Mar 17 18:51:47.011154 kernel: with environment: Mar 17 18:51:47.011160 kernel: HOME=/ Mar 17 18:51:47.011167 kernel: TERM=linux Mar 17 18:51:47.011173 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 18:51:47.011182 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:51:47.011193 systemd[1]: Detected virtualization microsoft. Mar 17 18:51:47.011200 systemd[1]: Detected architecture arm64. Mar 17 18:51:47.011207 systemd[1]: Running in initrd. Mar 17 18:51:47.011214 systemd[1]: No hostname configured, using default hostname. Mar 17 18:51:47.011221 systemd[1]: Hostname set to . Mar 17 18:51:47.011232 systemd[1]: Initializing machine ID from random generator. Mar 17 18:51:47.011239 systemd[1]: Queued start job for default target initrd.target. Mar 17 18:51:47.011247 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:51:47.011255 systemd[1]: Reached target cryptsetup.target. Mar 17 18:51:47.011262 systemd[1]: Reached target paths.target. Mar 17 18:51:47.011268 systemd[1]: Reached target slices.target. Mar 17 18:51:47.011279 systemd[1]: Reached target swap.target. Mar 17 18:51:47.011286 systemd[1]: Reached target timers.target. Mar 17 18:51:47.011293 systemd[1]: Listening on iscsid.socket. Mar 17 18:51:47.011300 systemd[1]: Listening on iscsiuio.socket. Mar 17 18:51:47.011309 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 18:51:47.011319 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 18:51:47.011326 systemd[1]: Listening on systemd-journald.socket. Mar 17 18:51:47.011333 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:51:47.011340 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:51:47.011347 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:51:47.011357 systemd[1]: Reached target sockets.target. Mar 17 18:51:47.011365 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:51:47.011371 systemd[1]: Finished network-cleanup.service. 
Mar 17 18:51:47.028038 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 18:51:47.028078 systemd[1]: Starting systemd-journald.service... Mar 17 18:51:47.028087 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:51:47.028095 systemd[1]: Starting systemd-resolved.service... Mar 17 18:51:47.028102 systemd[1]: Starting systemd-vconsole-setup.service... Mar 17 18:51:47.028110 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 18:51:47.028124 systemd-journald[276]: Journal started Mar 17 18:51:47.028192 systemd-journald[276]: Runtime Journal (/run/log/journal/77368532060f480cb68620c0aa16ad5a) is 8.0M, max 78.5M, 70.5M free. Mar 17 18:51:46.992497 systemd-modules-load[277]: Inserted module 'overlay' Mar 17 18:51:47.047775 systemd[1]: Started systemd-journald.service. Mar 17 18:51:47.047802 kernel: Bridge firewalling registered Mar 17 18:51:47.045591 systemd-resolved[278]: Positive Trust Anchors: Mar 17 18:51:47.078755 kernel: audit: type=1130 audit(1742237507.052:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.045601 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:51:47.045628 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:51:47.165648 kernel: SCSI subsystem initialized Mar 17 18:51:47.165671 kernel: audit: type=1130 audit(1742237507.125:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.165681 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 18:51:47.165691 kernel: device-mapper: uevent: version 1.0.3 Mar 17 18:51:47.165699 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 18:51:47.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.052055 systemd-modules-load[277]: Inserted module 'br_netfilter' Mar 17 18:51:47.192512 kernel: audit: type=1130 audit(1742237507.169:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:47.052227 systemd-resolved[278]: Defaulting to hostname 'linux'. Mar 17 18:51:47.072975 systemd[1]: Started systemd-resolved.service. Mar 17 18:51:47.220849 kernel: audit: type=1130 audit(1742237507.192:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.142394 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:51:47.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.168202 systemd-modules-load[277]: Inserted module 'dm_multipath' Mar 17 18:51:47.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.169810 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 18:51:47.285102 kernel: audit: type=1130 audit(1742237507.220:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.285124 kernel: audit: type=1130 audit(1742237507.225:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.192742 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:51:47.220890 systemd[1]: Finished systemd-vconsole-setup.service. Mar 17 18:51:47.226121 systemd[1]: Reached target nss-lookup.target. Mar 17 18:51:47.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.258678 systemd[1]: Starting dracut-cmdline-ask.service... Mar 17 18:51:47.366914 kernel: audit: type=1130 audit(1742237507.320:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.366950 kernel: audit: type=1130 audit(1742237507.341:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.284509 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:51:47.299227 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:51:47.394611 kernel: audit: type=1130 audit(1742237507.366:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:47.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.306041 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:51:47.324080 systemd[1]: Finished dracut-cmdline-ask.service. Mar 17 18:51:47.342358 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:51:47.368005 systemd[1]: Starting dracut-cmdline.service... Mar 17 18:51:47.413523 dracut-cmdline[298]: dracut-dracut-053 Mar 17 18:51:47.413523 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d Mar 17 18:51:47.506413 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:51:47.521414 kernel: iscsi: registered transport (tcp) Mar 17 18:51:47.542210 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:51:47.542270 kernel: QLogic iSCSI HBA Driver Mar 17 18:51:47.576860 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:51:47.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:47.582846 systemd[1]: Starting dracut-pre-udev.service... Mar 17 18:51:47.636402 kernel: raid6: neonx8 gen() 13823 MB/s Mar 17 18:51:47.656392 kernel: raid6: neonx8 xor() 10845 MB/s Mar 17 18:51:47.677391 kernel: raid6: neonx4 gen() 13571 MB/s Mar 17 18:51:47.698394 kernel: raid6: neonx4 xor() 11309 MB/s Mar 17 18:51:47.718397 kernel: raid6: neonx2 gen() 13013 MB/s Mar 17 18:51:47.738396 kernel: raid6: neonx2 xor() 10300 MB/s Mar 17 18:51:47.759394 kernel: raid6: neonx1 gen() 10548 MB/s Mar 17 18:51:47.779396 kernel: raid6: neonx1 xor() 8774 MB/s Mar 17 18:51:47.799392 kernel: raid6: int64x8 gen() 6278 MB/s Mar 17 18:51:47.820397 kernel: raid6: int64x8 xor() 3544 MB/s Mar 17 18:51:47.840392 kernel: raid6: int64x4 gen() 7226 MB/s Mar 17 18:51:47.860396 kernel: raid6: int64x4 xor() 3858 MB/s Mar 17 18:51:47.881393 kernel: raid6: int64x2 gen() 6150 MB/s Mar 17 18:51:47.901395 kernel: raid6: int64x2 xor() 3322 MB/s Mar 17 18:51:47.921397 kernel: raid6: int64x1 gen() 5049 MB/s Mar 17 18:51:47.946557 kernel: raid6: int64x1 xor() 2644 MB/s Mar 17 18:51:47.946577 kernel: raid6: using algorithm neonx8 gen() 13823 MB/s Mar 17 18:51:47.946594 kernel: raid6: .... xor() 10845 MB/s, rmw enabled Mar 17 18:51:47.950602 kernel: raid6: using neon recovery algorithm Mar 17 18:51:47.971664 kernel: xor: measuring software checksum speed Mar 17 18:51:47.971675 kernel: 8regs : 17173 MB/sec Mar 17 18:51:47.975794 kernel: 32regs : 20707 MB/sec Mar 17 18:51:47.979521 kernel: arm64_neon : 27589 MB/sec Mar 17 18:51:47.979539 kernel: xor: using function: arm64_neon (27589 MB/sec) Mar 17 18:51:48.040407 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Mar 17 18:51:48.050102 systemd[1]: Finished dracut-pre-udev.service. 
Mar 17 18:51:48.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:48.058000 audit: BPF prog-id=7 op=LOAD Mar 17 18:51:48.059000 audit: BPF prog-id=8 op=LOAD Mar 17 18:51:48.059965 systemd[1]: Starting systemd-udevd.service... Mar 17 18:51:48.074858 systemd-udevd[475]: Using default interface naming scheme 'v252'. Mar 17 18:51:48.079983 systemd[1]: Started systemd-udevd.service. Mar 17 18:51:48.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:48.090765 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 18:51:48.104338 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation Mar 17 18:51:48.139158 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:51:48.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:48.144517 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:51:48.182738 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:51:48.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:48.241401 kernel: hv_vmbus: Vmbus version:5.3 Mar 17 18:51:48.244415 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 17 18:51:48.244453 kernel: hv_vmbus: registering driver hid_hyperv Mar 17 18:51:48.265603 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Mar 17 18:51:48.265653 kernel: hv_vmbus: registering driver hv_netvsc Mar 17 18:51:48.265663 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 17 18:51:48.284984 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Mar 17 18:51:48.295405 kernel: hv_vmbus: registering driver hv_storvsc Mar 17 18:51:48.305399 kernel: scsi host0: storvsc_host_t Mar 17 18:51:48.305578 kernel: scsi host1: storvsc_host_t Mar 17 18:51:48.305658 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 17 18:51:48.320475 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Mar 17 18:51:48.340039 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 17 18:51:48.351054 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 18:51:48.351067 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 17 18:51:48.379206 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 17 18:51:48.379314 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 17 18:51:48.379413 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 17 18:51:48.379499 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 17 18:51:48.379575 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 17 18:51:48.379658 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:51:48.379668 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 17 18:51:48.379749 kernel: hv_netvsc 000d3a06-d517-000d-3a06-d517000d3a06 eth0: VF slot 1 added Mar 17 
18:51:48.389410 kernel: hv_vmbus: registering driver hv_pci Mar 17 18:51:48.397401 kernel: hv_pci 460eb8a1-2591-4ad6-ad23-1847acb45070: PCI VMBus probing: Using version 0x10004 Mar 17 18:51:48.498701 kernel: hv_pci 460eb8a1-2591-4ad6-ad23-1847acb45070: PCI host bridge to bus 2591:00 Mar 17 18:51:48.498814 kernel: pci_bus 2591:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 17 18:51:48.498903 kernel: pci_bus 2591:00: No busn resource found for root bus, will use [bus 00-ff] Mar 17 18:51:48.498972 kernel: pci 2591:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 17 18:51:48.499060 kernel: pci 2591:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 17 18:51:48.499135 kernel: pci 2591:00:02.0: enabling Extended Tags Mar 17 18:51:48.499210 kernel: pci 2591:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2591:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 17 18:51:48.499286 kernel: pci_bus 2591:00: busn_res: [bus 00-ff] end is updated to 00 Mar 17 18:51:48.499354 kernel: pci 2591:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 17 18:51:48.537416 kernel: mlx5_core 2591:00:02.0: firmware version: 16.30.1284 Mar 17 18:51:48.779308 kernel: mlx5_core 2591:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Mar 17 18:51:48.779440 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (538) Mar 17 18:51:48.779450 kernel: hv_netvsc 000d3a06-d517-000d-3a06-d517000d3a06 eth0: VF registering: eth1 Mar 17 18:51:48.779532 kernel: mlx5_core 2591:00:02.0 eth1: joined to eth0 Mar 17 18:51:48.729432 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:51:48.776962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:51:48.798412 kernel: mlx5_core 2591:00:02.0 enP9617s1: renamed from eth1 Mar 17 18:51:48.910390 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:51:48.932355 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:51:48.938435 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:51:48.952665 systemd[1]: Starting disk-uuid.service... Mar 17 18:51:48.978397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:51:48.987415 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:51:49.995694 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:51:49.995743 disk-uuid[604]: The operation has completed successfully. Mar 17 18:51:50.050941 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:51:50.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:50.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:50.051031 systemd[1]: Finished disk-uuid.service. Mar 17 18:51:50.064592 systemd[1]: Starting verity-setup.service... Mar 17 18:51:50.104375 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 18:51:50.420472 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:51:50.425984 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:51:50.434981 systemd[1]: Finished verity-setup.service. 
Mar 17 18:51:50.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:50.491406 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:51:50.492026 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:51:50.495945 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:51:50.496717 systemd[1]: Starting ignition-setup.service... Mar 17 18:51:50.503612 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:51:50.540955 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:51:50.541008 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:51:50.545490 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:51:50.592670 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:51:50.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:50.601000 audit: BPF prog-id=9 op=LOAD Mar 17 18:51:50.602165 systemd[1]: Starting systemd-networkd.service... Mar 17 18:51:50.618378 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:51:50.625938 systemd-networkd[848]: lo: Link UP Mar 17 18:51:50.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:50.625950 systemd-networkd[848]: lo: Gained carrier Mar 17 18:51:50.626716 systemd-networkd[848]: Enumeration completed Mar 17 18:51:50.629285 systemd[1]: Started systemd-networkd.service. Mar 17 18:51:50.629879 systemd-networkd[848]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:51:50.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:50.634300 systemd[1]: Reached target network.target. Mar 17 18:51:50.668901 iscsid[857]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:51:50.668901 iscsid[857]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Mar 17 18:51:50.668901 iscsid[857]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Mar 17 18:51:50.668901 iscsid[857]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:51:50.668901 iscsid[857]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:51:50.668901 iscsid[857]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:51:50.668901 iscsid[857]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:51:50.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:50.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:50.639271 systemd[1]: Starting iscsiuio.service... Mar 17 18:51:50.650014 systemd[1]: Started iscsiuio.service. Mar 17 18:51:50.664284 systemd[1]: Starting iscsid.service... Mar 17 18:51:50.672559 systemd[1]: Started iscsid.service. Mar 17 18:51:50.700976 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:51:50.719177 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:51:50.728860 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:51:50.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:50.739498 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:51:50.751458 systemd[1]: Reached target remote-fs.target. Mar 17 18:51:50.763601 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:51:50.787193 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:51:50.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:50.813699 systemd[1]: Finished ignition-setup.service. Mar 17 18:51:50.823020 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:51:50.859402 kernel: mlx5_core 2591:00:02.0 enP9617s1: Link up Mar 17 18:51:50.901611 kernel: hv_netvsc 000d3a06-d517-000d-3a06-d517000d3a06 eth0: Data path switched to VF: enP9617s1 Mar 17 18:51:50.901788 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:51:50.901546 systemd-networkd[848]: enP9617s1: Link UP Mar 17 18:51:50.901676 systemd-networkd[848]: eth0: Link UP Mar 17 18:51:50.901797 systemd-networkd[848]: eth0: Gained carrier Mar 17 18:51:50.913554 systemd-networkd[848]: enP9617s1: Gained carrier Mar 17 18:51:50.924447 systemd-networkd[848]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 18:51:52.351506 systemd-networkd[848]: eth0: Gained IPv6LL Mar 17 18:51:53.080685 ignition[872]: Ignition 2.14.0 Mar 17 18:51:53.080696 ignition[872]: Stage: fetch-offline Mar 17 18:51:53.080749 ignition[872]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:51:53.080773 ignition[872]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:51:53.172690 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:51:53.172841 ignition[872]: parsed url from cmdline: "" Mar 17 18:51:53.179492 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:51:53.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:53.172845 ignition[872]: no config URL provided Mar 17 18:51:53.216917 kernel: kauditd_printk_skb: 18 callbacks suppressed Mar 17 18:51:53.216941 kernel: audit: type=1130 audit(1742237513.185:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:53.172851 ignition[872]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:51:53.195682 systemd[1]: Starting ignition-fetch.service... Mar 17 18:51:53.172859 ignition[872]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:51:53.172864 ignition[872]: failed to fetch config: resource requires networking Mar 17 18:51:53.172973 ignition[872]: Ignition finished successfully Mar 17 18:51:53.203122 ignition[880]: Ignition 2.14.0 Mar 17 18:51:53.203128 ignition[880]: Stage: fetch Mar 17 18:51:53.203229 ignition[880]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:51:53.203246 ignition[880]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:51:53.205806 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:51:53.205924 ignition[880]: parsed url from cmdline: "" Mar 17 18:51:53.205927 ignition[880]: no config URL provided Mar 17 18:51:53.205932 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:51:53.205939 ignition[880]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:51:53.205966 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 17 18:51:53.313953 ignition[880]: GET result: OK Mar 17 18:51:53.314013 ignition[880]: config has been read from IMDS userdata Mar 17 18:51:53.316534 unknown[880]: fetched base config from "system" Mar 17 18:51:53.347448 kernel: audit: type=1130 audit(1742237513.325:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:53.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:53.314031 ignition[880]: parsing config with SHA512: 293bf0a904d30fb6660b95cb080aa41cdcd8587f901f5cf85254ee89dada9a536d3a8bdcce180f44d605093dc189fa4b662754edfe0fe5b2cfa8ebec2af49946 Mar 17 18:51:53.316541 unknown[880]: fetched base config from "system" Mar 17 18:51:53.316898 ignition[880]: fetch: fetch complete Mar 17 18:51:53.316546 unknown[880]: fetched user config from "azure" Mar 17 18:51:53.316902 ignition[880]: fetch: fetch passed Mar 17 18:51:53.318044 systemd[1]: Finished ignition-fetch.service. Mar 17 18:51:53.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:53.316943 ignition[880]: Ignition finished successfully Mar 17 18:51:53.326576 systemd[1]: Starting ignition-kargs.service... Mar 17 18:51:53.422492 kernel: audit: type=1130 audit(1742237513.368:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:53.422515 kernel: audit: type=1130 audit(1742237513.405:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:53.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:53.354941 ignition[887]: Ignition 2.14.0 Mar 17 18:51:53.363929 systemd[1]: Finished ignition-kargs.service. Mar 17 18:51:53.354947 ignition[887]: Stage: kargs Mar 17 18:51:53.388368 systemd[1]: Starting ignition-disks.service... Mar 17 18:51:53.355066 ignition[887]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:51:53.400836 systemd[1]: Finished ignition-disks.service. Mar 17 18:51:53.355090 ignition[887]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:51:53.405496 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:51:53.358862 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:51:53.427435 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:51:53.361038 ignition[887]: kargs: kargs passed Mar 17 18:51:53.434980 systemd[1]: Reached target local-fs.target. Mar 17 18:51:53.361106 ignition[887]: Ignition finished successfully Mar 17 18:51:53.443436 systemd[1]: Reached target sysinit.target. Mar 17 18:51:53.395181 ignition[893]: Ignition 2.14.0 Mar 17 18:51:53.450460 systemd[1]: Reached target basic.target. Mar 17 18:51:53.395188 ignition[893]: Stage: disks Mar 17 18:51:53.459629 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:51:53.395291 ignition[893]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:51:53.395312 ignition[893]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:51:53.524324 systemd-fsck[901]: ROOT: clean, 623/7326000 files, 481077/7359488 blocks Mar 17 18:51:53.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:53.398172 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:51:53.557499 kernel: audit: type=1130 audit(1742237513.534:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:53.530463 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:51:53.399877 ignition[893]: disks: disks passed Mar 17 18:51:53.556508 systemd[1]: Mounting sysroot.mount... Mar 17 18:51:53.399930 ignition[893]: Ignition finished successfully Mar 17 18:51:53.586399 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:51:53.587522 systemd[1]: Mounted sysroot.mount. Mar 17 18:51:53.591449 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:51:53.636832 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:51:53.641632 systemd[1]: Starting flatcar-metadata-hostname.service... Mar 17 18:51:53.653723 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:51:53.653767 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:51:53.668917 systemd[1]: Mounted sysroot-usr.mount. 
Mar 17 18:51:53.716850 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:51:53.722059 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:51:53.750754 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (911) Mar 17 18:51:53.750814 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:51:53.750961 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:51:53.762477 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:51:53.767066 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:51:53.771061 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:51:53.781941 initrd-setup-root[942]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:51:53.804949 initrd-setup-root[950]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:51:53.813848 initrd-setup-root[958]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:51:54.194641 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:51:54.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:54.221440 systemd[1]: Starting ignition-mount.service... Mar 17 18:51:54.234898 kernel: audit: type=1130 audit(1742237514.199:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:54.235554 systemd[1]: Starting sysroot-boot.service... Mar 17 18:51:54.240142 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 17 18:51:54.240250 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Mar 17 18:51:54.268001 ignition[978]: INFO : Ignition 2.14.0 Mar 17 18:51:54.268001 ignition[978]: INFO : Stage: mount Mar 17 18:51:54.276154 ignition[978]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:51:54.276154 ignition[978]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:51:54.276154 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:51:54.276154 ignition[978]: INFO : mount: mount passed Mar 17 18:51:54.276154 ignition[978]: INFO : Ignition finished successfully Mar 17 18:51:54.342434 kernel: audit: type=1130 audit(1742237514.287:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:54.342457 kernel: audit: type=1130 audit(1742237514.332:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:54.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:54.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:54.276306 systemd[1]: Finished ignition-mount.service. Mar 17 18:51:54.328049 systemd[1]: Finished sysroot-boot.service. 
Mar 17 18:51:54.751175 coreos-metadata[910]: Mar 17 18:51:54.751 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 18:51:54.760868 coreos-metadata[910]: Mar 17 18:51:54.760 INFO Fetch successful Mar 17 18:51:54.795846 coreos-metadata[910]: Mar 17 18:51:54.795 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 17 18:51:54.807312 coreos-metadata[910]: Mar 17 18:51:54.807 INFO Fetch successful Mar 17 18:51:54.820563 coreos-metadata[910]: Mar 17 18:51:54.820 INFO wrote hostname ci-3510.3.7-a-95dfbd75e4 to /sysroot/etc/hostname Mar 17 18:51:54.830212 systemd[1]: Finished flatcar-metadata-hostname.service. Mar 17 18:51:54.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:54.837154 systemd[1]: Starting ignition-files.service... Mar 17 18:51:54.864951 kernel: audit: type=1130 audit(1742237514.835:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:54.866339 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:51:54.881398 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (989) Mar 17 18:51:54.892906 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:51:54.892919 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:51:54.892934 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:51:54.901936 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:51:54.918315 ignition[1008]: INFO : Ignition 2.14.0 Mar 17 18:51:54.922474 ignition[1008]: INFO : Stage: files Mar 17 18:51:54.926106 ignition[1008]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:51:54.926106 ignition[1008]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:51:54.946305 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:51:54.946305 ignition[1008]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:51:54.946305 ignition[1008]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:51:54.946305 ignition[1008]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:51:54.989991 ignition[1008]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:51:54.997735 ignition[1008]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:51:54.997735 ignition[1008]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:51:54.997735 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:51:54.997735 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:51:54.997735 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:51:54.997735 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] 
writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:51:54.997735 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:51:54.997735 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:51:54.997735 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Mar 17 18:51:54.997735 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:51:54.997735 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3177440357" Mar 17 18:51:54.997735 ignition[1008]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3177440357": device or resource busy Mar 17 18:51:54.997735 ignition[1008]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3177440357", trying btrfs: device or resource busy Mar 17 18:51:54.997735 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3177440357" Mar 17 18:51:54.990613 unknown[1008]: wrote ssh authorized keys file for user: core Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3177440357" Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem3177440357" Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem3177440357" Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3921413161" Mar 17 18:51:55.153769 ignition[1008]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3921413161": device or resource busy Mar 17 18:51:55.153769 ignition[1008]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3921413161", trying btrfs: device or resource busy Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3921413161" Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at 
"/mnt/oem3921413161" Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem3921413161" Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem3921413161" Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Mar 17 18:51:55.153769 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:51:55.002886 systemd[1]: mnt-oem3177440357.mount: Deactivated successfully. Mar 17 18:51:55.316839 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 17 18:51:55.478166 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Mar 17 18:51:55.715666 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:51:55.715666 ignition[1008]: INFO : files: op(f): [started] processing unit "waagent.service" Mar 17 18:51:55.715666 ignition[1008]: INFO : files: op(f): [finished] processing unit "waagent.service" Mar 17 18:51:55.715666 ignition[1008]: INFO : files: op(10): [started] processing unit "nvidia.service" Mar 17 18:51:55.772247 kernel: audit: type=1130 audit(1742237515.739:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:55.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:55.728659 systemd[1]: Finished ignition-files.service. Mar 17 18:51:55.777349 ignition[1008]: INFO : files: op(10): [finished] processing unit "nvidia.service" Mar 17 18:51:55.777349 ignition[1008]: INFO : files: op(11): [started] setting preset to enabled for "waagent.service" Mar 17 18:51:55.777349 ignition[1008]: INFO : files: op(11): [finished] setting preset to enabled for "waagent.service" Mar 17 18:51:55.777349 ignition[1008]: INFO : files: op(12): [started] setting preset to enabled for "nvidia.service" Mar 17 18:51:55.777349 ignition[1008]: INFO : files: op(12): [finished] setting preset to enabled for "nvidia.service" Mar 17 18:51:55.777349 ignition[1008]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:51:55.777349 ignition[1008]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:51:55.777349 ignition[1008]: INFO : files: files passed Mar 17 18:51:55.777349 ignition[1008]: INFO : Ignition finished successfully Mar 17 18:51:55.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:55.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:55.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:55.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:55.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:55.742348 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:51:55.885076 initrd-setup-root-after-ignition[1033]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:51:55.765413 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:51:55.773331 systemd[1]: Starting ignition-quench.service... Mar 17 18:51:55.789014 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:51:55.789098 systemd[1]: Finished ignition-quench.service. Mar 17 18:51:55.793797 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:51:55.806434 systemd[1]: Reached target ignition-complete.target. Mar 17 18:51:55.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:55.825460 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:51:55.848925 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:51:55.849026 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:51:55.862698 systemd[1]: Reached target initrd-fs.target. Mar 17 18:51:55.871002 systemd[1]: Reached target initrd.target. Mar 17 18:51:55.880003 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:51:55.880914 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:51:55.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:55.934820 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:51:55.940316 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:51:55.956964 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:51:55.966253 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:51:55.975084 systemd[1]: Stopped target timers.target. Mar 17 18:51:55.983161 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:51:55.983315 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:51:55.994548 systemd[1]: Stopped target initrd.target. Mar 17 18:51:56.002729 systemd[1]: Stopped target basic.target. Mar 17 18:51:56.010576 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:51:56.021077 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:51:56.031146 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:51:56.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:56.040961 systemd[1]: Stopped target remote-fs.target. Mar 17 18:51:56.049203 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:51:56.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.059071 systemd[1]: Stopped target sysinit.target. Mar 17 18:51:56.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.067661 systemd[1]: Stopped target local-fs.target. Mar 17 18:51:56.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.075549 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:51:56.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.083716 systemd[1]: Stopped target swap.target. Mar 17 18:51:56.092123 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:51:56.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.176681 iscsid[857]: iscsid shutting down. Mar 17 18:51:56.092276 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:51:56.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.196502 ignition[1046]: INFO : Ignition 2.14.0 Mar 17 18:51:56.196502 ignition[1046]: INFO : Stage: umount Mar 17 18:51:56.196502 ignition[1046]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:51:56.196502 ignition[1046]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:51:56.196502 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:51:56.196502 ignition[1046]: INFO : umount: umount passed Mar 17 18:51:56.196502 ignition[1046]: INFO : Ignition finished successfully Mar 17 18:51:56.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:56.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.101515 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:51:56.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.110222 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:51:56.110366 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:51:56.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.118790 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:51:56.118928 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:51:56.128216 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:51:56.128342 systemd[1]: Stopped ignition-files.service. Mar 17 18:51:56.136152 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 18:51:56.136285 systemd[1]: Stopped flatcar-metadata-hostname.service. Mar 17 18:51:56.146488 systemd[1]: Stopping ignition-mount.service... Mar 17 18:51:56.155911 systemd[1]: Stopping iscsid.service... Mar 17 18:51:56.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.165792 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:51:56.166033 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:51:56.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.171825 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:51:56.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.187243 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:51:56.187466 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:51:56.192509 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:51:56.192651 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:51:56.203079 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:51:56.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.203203 systemd[1]: Stopped iscsid.service. Mar 17 18:51:56.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:56.210600 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:51:56.210753 systemd[1]: Stopped ignition-mount.service. Mar 17 18:51:56.454000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:51:56.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.226065 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:51:56.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.228833 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:51:56.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.228901 systemd[1]: Stopped ignition-disks.service. Mar 17 18:51:56.245863 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:51:56.245907 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:51:56.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.263281 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 18:51:56.263326 systemd[1]: Stopped ignition-fetch.service. Mar 17 18:51:56.272961 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:51:56.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.273001 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:51:56.561297 kernel: hv_netvsc 000d3a06-d517-000d-3a06-d517000d3a06 eth0: Data path switched from VF: enP9617s1 Mar 17 18:51:56.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.288254 systemd[1]: Stopped target paths.target. Mar 17 18:51:56.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.297200 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:51:56.309848 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:51:56.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.320859 systemd[1]: Stopped target slices.target. Mar 17 18:51:56.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:56.325111 systemd[1]: Stopped target sockets.target. Mar 17 18:51:56.333302 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:51:56.333351 systemd[1]: Closed iscsid.socket. Mar 17 18:51:56.342693 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:51:56.342736 systemd[1]: Stopped ignition-setup.service. Mar 17 18:51:56.356286 systemd[1]: Stopping iscsiuio.service... Mar 17 18:51:56.366043 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:51:56.366145 systemd[1]: Stopped iscsiuio.service. Mar 17 18:51:56.375052 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:51:56.375138 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:51:56.383613 systemd[1]: Stopped target network.target. Mar 17 18:51:56.391709 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:51:56.391749 systemd[1]: Closed iscsiuio.socket. Mar 17 18:51:56.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.403311 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:51:56.411895 systemd-networkd[848]: eth0: DHCPv6 lease lost Mar 17 18:51:56.674000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:51:56.413309 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:51:56.422857 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:51:56.422946 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:51:56.432664 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:51:56.432752 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:51:56.442074 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:51:56.442112 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:51:56.450145 systemd[1]: Stopping network-cleanup.service... Mar 17 18:51:56.458989 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:51:56.459050 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:51:56.464305 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:51:56.464372 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:51:56.477807 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:51:56.477853 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:51:56.482784 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:51:56.496963 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:51:56.506149 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:51:56.506300 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:51:56.511409 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:51:56.511453 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:51:56.519603 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:51:56.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.519639 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:51:56.529046 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
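The coreos-metadata entries recorded earlier (around 18:51:54.75, before the initrd teardown above) fetch the instance name from the Azure instance metadata service at 169.254.169.254 and write it to /sysroot/etc/hostname. A minimal sketch of that IMDS call, using the exact URL from the log; the "Metadata: true" header is the one Azure IMDS requires, and fetch_instance_name and the 5-second timeout are illustrative choices.

    import urllib.request

    IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                "?api-version=2017-08-01&format=text")

    def fetch_instance_name(timeout=5):
        # IMDS only answers requests that carry the Metadata header.
        req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode().strip()

    if __name__ == "__main__":
        # On the VM in this log the fetched name was ci-3510.3.7-a-95dfbd75e4.
        print(fetch_instance_name())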
Mar 17 18:51:56.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:56.529097 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:51:56.537537 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:51:56.537577 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:51:56.555462 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:51:56.555514 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:51:56.565717 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:51:56.579830 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:51:56.579892 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:51:56.585352 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:51:56.585708 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:51:56.657619 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:51:56.657727 systemd[1]: Stopped network-cleanup.service. Mar 17 18:51:56.771379 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:51:56.771517 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:51:56.780608 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:51:56.791653 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:51:56.791727 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:51:56.801494 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:51:56.869397 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Mar 17 18:51:56.819761 systemd[1]: Switching root. Mar 17 18:51:56.869459 systemd-journald[276]: Journal stopped Mar 17 18:52:07.304458 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:52:07.304482 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 18:52:07.304492 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:52:07.304503 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:52:07.304512 kernel: SELinux: policy capability open_perms=1 Mar 17 18:52:07.304520 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:52:07.304529 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:52:07.304537 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:52:07.304545 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:52:07.304553 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:52:07.304560 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:52:07.304571 kernel: kauditd_printk_skb: 42 callbacks suppressed Mar 17 18:52:07.304580 kernel: audit: type=1403 audit(1742237518.769:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:52:07.304590 systemd[1]: Successfully loaded SELinux policy in 262.422ms. Mar 17 18:52:07.304601 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.324ms. Mar 17 18:52:07.304612 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:52:07.304622 systemd[1]: Detected virtualization microsoft. 
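The kernel lines just above enumerate SELinux policy capabilities (network_peer_controls=1, open_perms=1, always_check_network=0, and so on). A small, hypothetical helper for pulling those flags out of a captured log like this one; the regex only matches the exact "SELinux: policy capability name=value" wording shown above.

    import re

    CAP_RE = re.compile(r"SELinux: policy capability (\S+)=(\d)")

    def policy_capabilities(log_text):
        # Returns e.g. {"open_perms": True, "always_check_network": False}.
        return {name: value == "1" for name, value in CAP_RE.findall(log_text)}

    sample = "kernel: SELinux: policy capability open_perms=1"
    assert policy_capabilities(sample) == {"open_perms": True}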
Mar 17 18:52:07.304631 systemd[1]: Detected architecture arm64. Mar 17 18:52:07.304642 systemd[1]: Detected first boot. Mar 17 18:52:07.304651 systemd[1]: Hostname set to <ci-3510.3.7-a-95dfbd75e4>. Mar 17 18:52:07.304661 systemd[1]: Initializing machine ID from random generator. Mar 17 18:52:07.304670 kernel: audit: type=1400 audit(1742237519.416:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:52:07.304681 kernel: audit: type=1400 audit(1742237519.416:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:52:07.304689 kernel: audit: type=1334 audit(1742237519.432:84): prog-id=10 op=LOAD Mar 17 18:52:07.304698 kernel: audit: type=1334 audit(1742237519.432:85): prog-id=10 op=UNLOAD Mar 17 18:52:07.304706 kernel: audit: type=1334 audit(1742237519.449:86): prog-id=11 op=LOAD Mar 17 18:52:07.304715 kernel: audit: type=1334 audit(1742237519.449:87): prog-id=11 op=UNLOAD Mar 17 18:52:07.304723 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 18:52:07.304733 kernel: audit: type=1400 audit(1742237520.586:88): avc: denied { associate } for pid=1079 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:52:07.304744 kernel: audit: type=1300 audit(1742237520.586:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458a2 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1062 pid=1079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:07.304753 kernel: audit: type=1327 audit(1742237520.586:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:52:07.304762 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:52:07.304771 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:52:07.304781 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:52:07.304791 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:52:07.304801 kernel: kauditd_printk_skb: 6 callbacks suppressed Mar 17 18:52:07.304810 kernel: audit: type=1334 audit(1742237526.571:90): prog-id=12 op=LOAD Mar 17 18:52:07.304819 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 18:52:07.304828 kernel: audit: type=1334 audit(1742237526.571:91): prog-id=3 op=UNLOAD Mar 17 18:52:07.304838 systemd[1]: Stopped initrd-switch-root.service.
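systemd warns above that locksmithd.service still uses the legacy CPUShares= and MemoryLimit= directives and asks for CPUWeight= and MemoryMax= instead. A hedged sketch of a small checker that performs only the textual mapping named in those warnings; it deliberately does not convert values, since CPUShares= and CPUWeight= use different ranges, and report_legacy_directives plus the sample unit text are hypothetical.

    LEGACY = {"CPUShares": "CPUWeight", "MemoryLimit": "MemoryMax"}

    def report_legacy_directives(unit_text):
        # Flag lines that still use the deprecated directives named in the
        # warnings above; values are left for manual review.
        findings = []
        for lineno, line in enumerate(unit_text.splitlines(), 1):
            key = line.split("=", 1)[0].strip()
            if key in LEGACY:
                findings.append((lineno, key, LEGACY[key]))
        return findings

    unit = "[Service]\nCPUShares=512\nMemoryLimit=1G\n"
    for lineno, old, new in report_legacy_directives(unit):
        print(f"line {lineno}: {old}= is deprecated, consider {new}=")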
Mar 17 18:52:07.304849 kernel: audit: type=1334 audit(1742237526.571:92): prog-id=13 op=LOAD Mar 17 18:52:07.304859 kernel: audit: type=1334 audit(1742237526.571:93): prog-id=14 op=LOAD Mar 17 18:52:07.304867 kernel: audit: type=1334 audit(1742237526.571:94): prog-id=4 op=UNLOAD Mar 17 18:52:07.304878 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 18:52:07.304887 kernel: audit: type=1334 audit(1742237526.571:95): prog-id=5 op=UNLOAD Mar 17 18:52:07.304896 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:52:07.304906 kernel: audit: type=1131 audit(1742237526.572:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.304914 kernel: audit: type=1334 audit(1742237526.600:97): prog-id=12 op=UNLOAD Mar 17 18:52:07.304924 kernel: audit: type=1130 audit(1742237526.622:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.304933 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:52:07.304943 kernel: audit: type=1131 audit(1742237526.622:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.304953 systemd[1]: Created slice system-getty.slice. Mar 17 18:52:07.304963 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:52:07.304972 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:52:07.304983 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:52:07.304992 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:52:07.305001 systemd[1]: Created slice user.slice. Mar 17 18:52:07.305011 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:52:07.305020 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:52:07.305031 systemd[1]: Set up automount boot.automount. Mar 17 18:52:07.305041 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:52:07.305050 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 18:52:07.305060 systemd[1]: Stopped target initrd-fs.target. Mar 17 18:52:07.305069 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 18:52:07.305078 systemd[1]: Reached target integritysetup.target. Mar 17 18:52:07.305088 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:52:07.305097 systemd[1]: Reached target remote-fs.target. Mar 17 18:52:07.305108 systemd[1]: Reached target slices.target. Mar 17 18:52:07.305117 systemd[1]: Reached target swap.target. Mar 17 18:52:07.305127 systemd[1]: Reached target torcx.target. Mar 17 18:52:07.305136 systemd[1]: Reached target veritysetup.target. Mar 17 18:52:07.305145 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:52:07.305155 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:52:07.305165 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:52:07.305176 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:52:07.305185 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:52:07.305195 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:52:07.305204 systemd[1]: Mounting dev-hugepages.mount... 
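The slice names above appear as system-addon\x2dconfig.slice and system-serial\x2dgetty.slice: systemd escapes a literal "-" inside a name component as \x2d because "-" also acts as the hierarchy separator in unit names. A tiny sketch of just that escaping rule; escape_component is a hypothetical helper, and the real systemd-escape handles more cases (leading dots, "/" mapping, non-ASCII) than this fragment.

    def escape_component(name):
        # Escape "-" (and anything outside [A-Za-z0-9:_.]) as \xXX, which is
        # how "addon-config" shows up as "addon\x2dconfig" in the slice names.
        allowed = set("abcdefghijklmnopqrstuvwxyz"
                      "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")
        return "".join(c if c in allowed else "\\x%02x" % ord(c) for c in name)

    assert escape_component("addon-config") == "addon\\x2dconfig"
    assert escape_component("serial-getty") == "serial\\x2dgetty"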
Mar 17 18:52:07.305213 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:52:07.305223 systemd[1]: Mounting media.mount... Mar 17 18:52:07.305233 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:52:07.305244 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:52:07.305254 systemd[1]: Mounting tmp.mount... Mar 17 18:52:07.305263 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:52:07.305273 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:52:07.305282 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:52:07.305292 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:52:07.305301 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:52:07.305310 systemd[1]: Starting modprobe@drm.service... Mar 17 18:52:07.305321 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:52:07.305330 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:52:07.305339 systemd[1]: Starting modprobe@loop.service... Mar 17 18:52:07.305349 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:52:07.305359 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:52:07.305368 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:52:07.305378 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:52:07.305399 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:52:07.305408 systemd[1]: Stopped systemd-journald.service. Mar 17 18:52:07.305419 systemd[1]: systemd-journald.service: Consumed 2.822s CPU time. Mar 17 18:52:07.305429 systemd[1]: Starting systemd-journald.service... Mar 17 18:52:07.305438 kernel: loop: module loaded Mar 17 18:52:07.305447 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:52:07.305457 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:52:07.305466 kernel: fuse: init (API version 7.34) Mar 17 18:52:07.305475 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:52:07.305485 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:52:07.305494 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:52:07.305505 systemd[1]: Stopped verity-setup.service. Mar 17 18:52:07.305514 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:52:07.305524 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:52:07.305533 systemd[1]: Mounted media.mount. Mar 17 18:52:07.305547 systemd-journald[1157]: Journal started Mar 17 18:52:07.305590 systemd-journald[1157]: Runtime Journal (/run/log/journal/26b17c75e45e496eab76207fc5603381) is 8.0M, max 78.5M, 70.5M free. 
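journald reports its runtime journal under /run/log/journal/26b17c75e45e496eab76207fc5603381; that directory is named after the machine ID initialized from the random generator a moment earlier. A small sketch, assuming the conventional /etc/machine-id location, of how that path is derived on a running system; runtime_journal_dir is an illustrative helper name.

    from pathlib import Path

    def runtime_journal_dir(machine_id_file="/etc/machine-id"):
        # journald keeps per-machine runtime journals in a directory named
        # after the 32-hex-character machine ID, as in the path logged above.
        machine_id = Path(machine_id_file).read_text().strip()
        return Path("/run/log/journal") / machine_id

    if __name__ == "__main__":
        print(runtime_journal_dir())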
Mar 17 18:51:58.769000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:51:59.416000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:51:59.416000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:51:59.432000 audit: BPF prog-id=10 op=LOAD Mar 17 18:51:59.432000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:51:59.449000 audit: BPF prog-id=11 op=LOAD Mar 17 18:51:59.449000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:52:00.586000 audit[1079]: AVC avc: denied { associate } for pid=1079 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:52:00.586000 audit[1079]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458a2 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1062 pid=1079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:00.586000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:52:00.596000 audit[1079]: AVC avc: denied { associate } for pid=1079 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:52:00.596000 audit[1079]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145979 a2=1ed a3=0 items=2 ppid=1062 pid=1079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:00.596000 audit: CWD cwd="/" Mar 17 18:52:00.596000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:00.596000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:00.596000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:52:06.571000 audit: BPF prog-id=12 op=LOAD Mar 17 18:52:06.571000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:52:06.571000 audit: BPF prog-id=13 op=LOAD Mar 17 18:52:06.571000 audit: BPF prog-id=14 op=LOAD Mar 17 18:52:06.571000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:52:06.571000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:52:06.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:06.600000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:52:06.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:06.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.174000 audit: BPF prog-id=15 op=LOAD Mar 17 18:52:07.174000 audit: BPF prog-id=16 op=LOAD Mar 17 18:52:07.174000 audit: BPF prog-id=17 op=LOAD Mar 17 18:52:07.174000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:52:07.174000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:52:07.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.302000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:52:07.302000 audit[1157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc7cd9080 a2=4000 a3=1 items=0 ppid=1 pid=1157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:07.302000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:52:06.569829 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:52:00.540351 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:52:06.569841 systemd[1]: Unnecessary job was removed for dev-sda6.device. 
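torcx-generator logs the store paths it will search (the store_paths list in the "common configuration parsed" entry above) and, further on, which of them it skips with "no such file or directory". A hypothetical re-creation of that probe, using the exact paths from the log; probe_stores is an illustrative name and the real generator does more than existence checks.

    import os

    STORE_PATHS = [
        "/usr/share/torcx/store",
        "/usr/share/oem/torcx/store/3510.3.7",
        "/usr/share/oem/torcx/store",
        "/var/lib/torcx/store/3510.3.7",
        "/var/lib/torcx/store",
    ]

    def probe_stores(paths=STORE_PATHS):
        # Mirror the logged behaviour: missing directories are skipped,
        # existing ones are searched for *.torcx.tgz archives.
        for path in paths:
            if os.path.isdir(path):
                print(f"searching store {path}")
            else:
                print(f"store skipped: {path}: no such file or directory")

    if __name__ == "__main__":
        probe_stores()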
Mar 17 18:52:00.572827 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:52:06.572460 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:52:00.572865 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:52:06.572785 systemd[1]: systemd-journald.service: Consumed 2.822s CPU time. Mar 17 18:52:00.572917 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:52:00.572927 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:52:00.572971 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:52:00.572985 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:52:00.573193 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:52:00.573230 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:52:00.573242 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:52:00.573671 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:52:00.573706 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:52:00.573724 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:52:00.573738 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:52:00.573753 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:52:00.573767 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:52:05.409080 /usr/lib/systemd/system-generators/torcx-generator[1079]: 
time="2025-03-17T18:52:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:52:05.409346 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:52:05.409479 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:52:05.409646 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:52:05.409701 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:52:05.409757 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-03-17T18:52:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:52:07.319078 systemd[1]: Started systemd-journald.service. Mar 17 18:52:07.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.319969 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:52:07.324398 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:52:07.329007 systemd[1]: Mounted tmp.mount. Mar 17 18:52:07.332732 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:52:07.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.337559 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:52:07.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.342551 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:52:07.342678 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:52:07.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:52:07.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.347392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:52:07.347515 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:52:07.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.352053 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:52:07.352194 systemd[1]: Finished modprobe@drm.service. Mar 17 18:52:07.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.356749 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:52:07.356886 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:52:07.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.361803 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:52:07.361937 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:52:07.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.366357 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:52:07.366500 systemd[1]: Finished modprobe@loop.service. Mar 17 18:52:07.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.371072 systemd[1]: Finished systemd-network-generator.service. 
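Earlier, once the docker image was unpacked, torcx-generator sealed its result into /run/metadata/torcx as KEY="value" assignments (TORCX_LOWER_PROFILES="vendor", TORCX_BINDIR="/run/torcx/bin", and so on). A rough parser for that format, assuming simple one-line assignments like those shown in the log; parse_metadata is a hypothetical helper.

    def parse_metadata(text):
        # Parse KEY="value" lines such as TORCX_PROFILE_PATH="/run/torcx/profile.json".
        env = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key] = value.strip().strip('"')
        return env

    sample = 'TORCX_LOWER_PROFILES="vendor"\nTORCX_BINDIR="/run/torcx/bin"\n'
    assert parse_metadata(sample)["TORCX_BINDIR"] == "/run/torcx/bin"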
Mar 17 18:52:07.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.376413 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:52:07.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.381289 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:52:07.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.386318 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:52:07.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.391406 systemd[1]: Reached target network-pre.target. Mar 17 18:52:07.397032 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:52:07.402209 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:52:07.406123 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:52:07.426055 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:52:07.431485 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:52:07.435929 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:52:07.437084 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:52:07.441514 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:52:07.442631 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:52:07.447539 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:52:07.453507 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:52:07.461181 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:52:07.466543 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:52:07.473428 udevadm[1199]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:52:07.489683 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:52:07.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.494378 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:52:07.507055 systemd-journald[1157]: Time spent on flushing to /var/log/journal/26b17c75e45e496eab76207fc5603381 is 13.518ms for 1070 entries. Mar 17 18:52:07.507055 systemd-journald[1157]: System Journal (/var/log/journal/26b17c75e45e496eab76207fc5603381) is 8.0M, max 2.6G, 2.6G free. Mar 17 18:52:07.578709 systemd-journald[1157]: Received client request to flush runtime journal. 
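journald reports spending 13.518ms flushing 1070 entries before it receives the "flush runtime journal" request, which systemd-journal-flush.service (started just above) issues via journalctl --flush. From the logged figures that is roughly 12.6 microseconds per entry; the two lines below only reproduce that arithmetic.

    entries = 1070
    flush_ms = 13.518    # both figures taken from the journald line above
    print(f"{flush_ms / entries * 1000:.1f} microseconds per entry")  # ~12.6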
Mar 17 18:52:07.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:07.541893 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:52:07.579743 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:52:07.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:08.014314 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:52:08.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:08.543173 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:52:08.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:08.548000 audit: BPF prog-id=18 op=LOAD Mar 17 18:52:08.548000 audit: BPF prog-id=19 op=LOAD Mar 17 18:52:08.548000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:52:08.548000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:52:08.549291 systemd[1]: Starting systemd-udevd.service... Mar 17 18:52:08.567094 systemd-udevd[1202]: Using default interface naming scheme 'v252'. Mar 17 18:52:08.858935 systemd[1]: Started systemd-udevd.service. Mar 17 18:52:08.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:08.871000 audit: BPF prog-id=20 op=LOAD Mar 17 18:52:08.872335 systemd[1]: Starting systemd-networkd.service... Mar 17 18:52:08.903041 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Mar 17 18:52:08.909000 audit: BPF prog-id=21 op=LOAD Mar 17 18:52:08.909000 audit: BPF prog-id=22 op=LOAD Mar 17 18:52:08.909000 audit: BPF prog-id=23 op=LOAD Mar 17 18:52:08.910460 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:52:08.959365 systemd[1]: Started systemd-userdbd.service. Mar 17 18:52:08.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:52:08.966412 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:52:08.982000 audit[1214]: AVC avc: denied { confidentiality } for pid=1214 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:52:08.999250 kernel: hv_vmbus: registering driver hv_balloon Mar 17 18:52:08.999367 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 17 18:52:08.999402 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 17 18:52:08.982000 audit[1214]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaadc193ce0 a1=aa2c a2=ffffaa9324b0 a3=aaaadc0f0010 items=12 ppid=1202 pid=1214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:08.982000 audit: CWD cwd="/" Mar 17 18:52:08.982000 audit: PATH item=0 name=(null) inode=5883 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:08.982000 audit: PATH item=1 name=(null) inode=9855 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:08.982000 audit: PATH item=2 name=(null) inode=9855 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:09.007581 kernel: hv_vmbus: registering driver hyperv_fb Mar 17 18:52:08.982000 audit: PATH item=3 name=(null) inode=9856 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:08.982000 audit: PATH item=4 name=(null) inode=9855 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:08.982000 audit: PATH item=5 name=(null) inode=9857 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:09.020440 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 17 18:52:09.020475 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 17 18:52:08.982000 audit: PATH item=6 name=(null) inode=9855 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:08.982000 audit: PATH item=7 name=(null) inode=9858 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:08.982000 audit: PATH item=8 name=(null) inode=9855 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:08.982000 audit: PATH item=9 name=(null) inode=9859 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:08.982000 audit: PATH item=10 name=(null) inode=9855 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:08.982000 audit: PATH item=11 name=(null) inode=9860 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:08.982000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:52:09.028342 kernel: Console: switching to colour dummy device 80x25 Mar 17 18:52:09.038853 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 18:52:09.069872 kernel: hv_utils: Registering HyperV Utility Driver Mar 17 18:52:09.069985 kernel: hv_vmbus: registering driver hv_utils Mar 17 18:52:09.071116 kernel: hv_utils: Heartbeat IC version 3.0 Mar 17 18:52:09.076711 kernel: hv_utils: Shutdown IC version 3.2 Mar 17 18:52:09.076789 kernel: hv_utils: TimeSync IC version 4.0 Mar 17 18:52:08.830226 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:52:08.909359 systemd-journald[1157]: Time jumped backwards, rotating. Mar 17 18:52:08.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:08.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:08.842133 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:52:08.848132 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:52:08.898228 systemd-networkd[1223]: lo: Link UP Mar 17 18:52:08.898232 systemd-networkd[1223]: lo: Gained carrier Mar 17 18:52:08.898609 systemd-networkd[1223]: Enumeration completed Mar 17 18:52:08.898735 systemd[1]: Started systemd-networkd.service. Mar 17 18:52:08.905476 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:52:08.912623 systemd-networkd[1223]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:52:08.961834 kernel: mlx5_core 2591:00:02.0 enP9617s1: Link up Mar 17 18:52:08.987838 kernel: hv_netvsc 000d3a06-d517-000d-3a06-d517000d3a06 eth0: Data path switched to VF: enP9617s1 Mar 17 18:52:08.988270 systemd-networkd[1223]: enP9617s1: Link UP Mar 17 18:52:08.988353 systemd-networkd[1223]: eth0: Link UP Mar 17 18:52:08.988357 systemd-networkd[1223]: eth0: Gained carrier Mar 17 18:52:08.992994 systemd-networkd[1223]: enP9617s1: Gained carrier Mar 17 18:52:09.003888 systemd-networkd[1223]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 18:52:09.096549 lvm[1278]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:52:09.139573 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:52:09.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.144778 systemd[1]: Reached target cryptsetup.target. Mar 17 18:52:09.150547 systemd[1]: Starting lvm2-activation.service... Mar 17 18:52:09.154551 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:52:09.176637 systemd[1]: Finished lvm2-activation.service. 
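systemd-networkd matched eth0 against /usr/lib/systemd/network/zz-default.network, obtained 10.200.20.24/24 from 168.63.129.16 via DHCPv4, and hv_netvsc switched the data path to the Mellanox VF enP9617s1. A read-only sketch, assuming iproute2's JSON output ('ip -j') is available, that reads back the IPv4 address networkd configured:

    import json
    import subprocess

    # 'ip -j' asks iproute2 for JSON output; this prints the IPv4 address that
    # systemd-networkd configured on eth0 (10.200.20.24/24 in the log above).
    proc = subprocess.run(["ip", "-j", "addr", "show", "dev", "eth0"],
                          capture_output=True, text=True, check=False)
    for iface in json.loads(proc.stdout or "[]"):
        for addr in iface.get("addr_info", []):
            if addr.get("family") == "inet":
                print(f'{iface["ifname"]}: {addr["local"]}/{addr["prefixlen"]}')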
Mar 17 18:52:09.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.182011 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:52:09.186899 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:52:09.186927 systemd[1]: Reached target local-fs.target. Mar 17 18:52:09.191095 systemd[1]: Reached target machines.target. Mar 17 18:52:09.197149 systemd[1]: Starting ldconfig.service... Mar 17 18:52:09.201092 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:52:09.201165 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:52:09.202464 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:52:09.207769 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:52:09.214441 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:52:09.220136 systemd[1]: Starting systemd-sysext.service... Mar 17 18:52:09.236050 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1283 (bootctl) Mar 17 18:52:09.237270 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:52:09.271016 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:52:09.278441 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:52:09.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.290071 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:52:09.291372 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:52:09.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.317088 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:52:09.317283 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:52:09.359774 kernel: loop0: detected capacity change from 0 to 189592 Mar 17 18:52:09.387769 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:52:09.412783 kernel: loop1: detected capacity change from 0 to 189592 Mar 17 18:52:09.416909 (sd-sysext)[1295]: Using extensions 'kubernetes'. Mar 17 18:52:09.417599 (sd-sysext)[1295]: Merged extensions into '/usr'. Mar 17 18:52:09.435598 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:52:09.439653 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:52:09.441099 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:52:09.446284 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:52:09.453384 systemd[1]: Starting modprobe@loop.service... Mar 17 18:52:09.458538 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
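The (sd-sysext) messages show the 'kubernetes' system extension being merged into /usr, which systemd-sysext implements as an overlay mount stacked on top of the read-only /usr partition. A sketch that confirms this from /proc/self/mountinfo; after the merge, expect both the base filesystem entry and the overlay entry for /usr:

    # systemd-sysext merges extensions by mounting an overlay on /usr, so
    # /proc/self/mountinfo carries the base /usr mount plus the overlay on top.
    with open("/proc/self/mountinfo") as f:
        for entry in f:
            fields = entry.split()
            mount_point = fields[4]
            fstype = fields[fields.index("-") + 1]  # fstype follows the '-' separator
            if mount_point == "/usr":
                print(f"/usr mounted as {fstype}")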
Mar 17 18:52:09.458661 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:52:09.459079 systemd-fsck[1291]: fsck.fat 4.2 (2021-01-31) Mar 17 18:52:09.459079 systemd-fsck[1291]: /dev/sda1: 236 files, 117179/258078 clusters Mar 17 18:52:09.461637 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:52:09.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.468398 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:52:09.472956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:52:09.473093 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:52:09.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.477851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:52:09.477975 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:52:09.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.483242 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:52:09.483367 systemd[1]: Finished modprobe@loop.service. Mar 17 18:52:09.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.490320 systemd[1]: Finished systemd-sysext.service. Mar 17 18:52:09.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.497034 systemd[1]: Mounting boot.mount... Mar 17 18:52:09.501927 systemd[1]: Starting ensure-sysext.service... Mar 17 18:52:09.505892 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:52:09.505963 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:52:09.507089 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:52:09.517877 systemd[1]: Mounted boot.mount. 
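systemd-fsck checked the EFI system partition (/dev/sda1, labelled EFI-SYSTEM) with fsck.fat, reporting 236 files and 117179 of 258078 clusters in use. The same numbers turned into a quick utilisation figure:

    # Figures reported by fsck.fat for /dev/sda1 (EFI-SYSTEM) above.
    files, used, total = 236, 117179, 258078
    print(f"{files} files, {used}/{total} clusters = {used / total:.1%} allocated")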
Mar 17 18:52:09.521835 systemd[1]: Reloading. Mar 17 18:52:09.585428 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:52:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:52:09.585771 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:52:09Z" level=info msg="torcx already run" Mar 17 18:52:09.658107 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:52:09.658127 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:52:09.674990 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:52:09.718719 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:52:09.746000 audit: BPF prog-id=24 op=LOAD Mar 17 18:52:09.746000 audit: BPF prog-id=25 op=LOAD Mar 17 18:52:09.746000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:52:09.746000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:52:09.746000 audit: BPF prog-id=26 op=LOAD Mar 17 18:52:09.746000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:52:09.746000 audit: BPF prog-id=27 op=LOAD Mar 17 18:52:09.746000 audit: BPF prog-id=28 op=LOAD Mar 17 18:52:09.746000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:52:09.746000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:52:09.747000 audit: BPF prog-id=29 op=LOAD Mar 17 18:52:09.747000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:52:09.749000 audit: BPF prog-id=30 op=LOAD Mar 17 18:52:09.749000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:52:09.749000 audit: BPF prog-id=31 op=LOAD Mar 17 18:52:09.749000 audit: BPF prog-id=32 op=LOAD Mar 17 18:52:09.749000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:52:09.749000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:52:09.756060 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:52:09.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.770315 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:52:09.771692 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:52:09.777136 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:52:09.783482 systemd[1]: Starting modprobe@loop.service... Mar 17 18:52:09.787241 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:52:09.787373 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:52:09.788228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:52:09.788374 systemd[1]: Finished modprobe@dm_mod.service. 
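The daemon reload triggered by the torcx generator flags two legacy directives in locksmithd.service (CPUShares= and MemoryLimit=, superseded by CPUWeight= and MemoryMax=) and a docker.socket ListenStream= path under the legacy /var/run/ directory. A rough, read-only scan for the same patterns; the directory list is the conventional vendor and runtime unit locations, not something taken from the log:

    import pathlib
    import re

    # Directives the reload above warns about.
    PATTERN = re.compile(r"^(CPUShares=.*|MemoryLimit=.*|.*ListenStream=/var/run/.*)$",
                         re.MULTILINE)

    for unit_dir in ("/usr/lib/systemd/system", "/run/systemd/system"):
        base = pathlib.Path(unit_dir)
        if not base.is_dir():
            continue
        for unit in sorted(base.glob("*.*")):
            try:
                text = unit.read_text(errors="replace")
            except OSError:
                continue  # drop-in directories, unreadable files
            for match in PATTERN.finditer(text):
                print(f"{unit}: {match.group(0).strip()}")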
Mar 17 18:52:09.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.796044 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:52:09.796185 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:52:09.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.802184 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:52:09.802310 systemd[1]: Finished modprobe@loop.service. Mar 17 18:52:09.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.809086 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:52:09.810408 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:52:09.816011 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:52:09.822174 systemd[1]: Starting modprobe@loop.service... Mar 17 18:52:09.826834 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:52:09.826968 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:52:09.827837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:52:09.827981 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:52:09.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.833436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:52:09.833564 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:52:09.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:52:09.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.839997 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:52:09.840125 systemd[1]: Finished modprobe@loop.service. Mar 17 18:52:09.845380 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:52:09.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.848287 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:52:09.849550 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:52:09.854948 systemd[1]: Starting modprobe@drm.service... Mar 17 18:52:09.859660 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:52:09.865361 systemd[1]: Starting modprobe@loop.service... Mar 17 18:52:09.869380 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:52:09.869441 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:52:09.870177 systemd[1]: Finished ensure-sysext.service. Mar 17 18:52:09.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.874612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:52:09.874742 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:52:09.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.879574 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:52:09.879698 systemd[1]: Finished modprobe@drm.service. Mar 17 18:52:09.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.884065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:52:09.884261 systemd[1]: Finished modprobe@efi_pstore.service. 
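Every unit transition above is mirrored by an audit SERVICE_START or SERVICE_STOP record whose msg='...' payload is itself a list of key=value fields. A small parser for one such record, using the modprobe@loop stop from the log above (minus its timestamp) as sample input:

    import shlex

    # One of the audit records above, reproduced without its timestamp.
    record = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
              "subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop "
              "comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" hostname=? "
              "addr=? terminal=? res=success'")

    def fields(text):
        # Audit payloads are space-separated key=value tokens; shlex handles
        # the quoting around msg='...' and comm="...".
        return dict(tok.split("=", 1) for tok in shlex.split(text) if "=" in tok)

    outer = fields(record.split("SERVICE_STOP", 1)[1])
    inner = fields(outer.pop("msg", ""))
    print(f"SERVICE_STOP unit={inner['unit']} res={inner['res']} pid={outer['pid']}")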
Mar 17 18:52:09.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.888978 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:52:09.889094 systemd[1]: Finished modprobe@loop.service. Mar 17 18:52:09.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:09.894681 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:52:09.894727 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:52:09.940221 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:52:10.189946 systemd-networkd[1223]: eth0: Gained IPv6LL Mar 17 18:52:10.194737 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:52:10.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:10.470253 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:52:10.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:10.476730 systemd[1]: Starting audit-rules.service... Mar 17 18:52:10.481337 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:52:10.486653 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:52:10.491000 audit: BPF prog-id=33 op=LOAD Mar 17 18:52:10.493452 systemd[1]: Starting systemd-resolved.service... Mar 17 18:52:10.498000 audit: BPF prog-id=34 op=LOAD Mar 17 18:52:10.500648 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:52:10.507514 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:52:10.545000 audit[1403]: SYSTEM_BOOT pid=1403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:52:10.548541 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:52:10.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:10.567051 systemd[1]: Finished clean-ca-certificates.service. 
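systemd-tmpfiles warns about duplicate entries for /run/lock, /root and /var/lib/systemd across the tmpfiles.d fragments; the later line is simply ignored. A rough duplicate finder over /usr/lib/tmpfiles.d that only compares the path column, so it flags somewhat more than systemd-tmpfiles itself would:

    import collections
    import pathlib

    # Collect every path column per fragment and report paths seen more than once.
    seen = collections.defaultdict(list)
    for conf in sorted(pathlib.Path("/usr/lib/tmpfiles.d").glob("*.conf")):
        for lineno, raw in enumerate(conf.read_text(errors="replace").splitlines(), 1):
            cols = raw.split()
            if len(cols) >= 2 and not raw.lstrip().startswith("#"):
                seen[cols[1]].append(f"{conf.name}:{lineno}")
    for path, sources in sorted(seen.items()):
        if len(sources) > 1:
            print(path, "->", ", ".join(sources))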
Mar 17 18:52:10.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:10.572213 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:52:10.601979 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:52:10.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:10.606991 systemd[1]: Reached target time-set.target. Mar 17 18:52:10.680732 systemd-resolved[1400]: Positive Trust Anchors: Mar 17 18:52:10.680758 systemd-resolved[1400]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:52:10.680785 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:52:10.709471 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:52:10.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:10.746531 systemd-resolved[1400]: Using system hostname 'ci-3510.3.7-a-95dfbd75e4'. Mar 17 18:52:10.748371 systemd[1]: Started systemd-resolved.service. Mar 17 18:52:10.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:52:10.752979 systemd[1]: Reached target network.target. Mar 17 18:52:10.757354 systemd[1]: Reached target network-online.target. Mar 17 18:52:10.762106 systemd[1]: Reached target nss-lookup.target. Mar 17 18:52:10.876000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:52:10.876000 audit[1418]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd05dff40 a2=420 a3=0 items=0 ppid=1397 pid=1418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:10.876000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:52:10.877243 augenrules[1418]: No rules Mar 17 18:52:10.878089 systemd[1]: Finished audit-rules.service. Mar 17 18:52:15.869137 ldconfig[1282]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:52:15.896342 systemd[1]: Finished ldconfig.service. Mar 17 18:52:15.902695 systemd[1]: Starting systemd-update-done.service... Mar 17 18:52:15.926236 systemd[1]: Finished systemd-update-done.service. 
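The audit SYSCALL record for auditctl carries a PROCTITLE field, which is the process command line hex-encoded with NUL separators. Decoding the value shown above recovers the augenrules invocation, /sbin/auditctl -R /etc/audit/audit.rules, whose rules file turned out to contain no rules:

    # PROCTITLE value copied from the audit SYSCALL record above: the process
    # command line, hex-encoded, with arguments separated by NUL bytes.
    proctitle = ("2F7362696E2F617564697463746C002D5200"
                 "2F6574632F61756469742F61756469742E72756C6573")
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))  # /sbin/auditctl -R /etc/audit/audit.rules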
Mar 17 18:52:15.931035 systemd[1]: Reached target sysinit.target. Mar 17 18:52:15.935392 systemd[1]: Started motdgen.path. Mar 17 18:52:15.938950 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:52:15.944906 systemd[1]: Started logrotate.timer. Mar 17 18:52:15.948854 systemd[1]: Started mdadm.timer. Mar 17 18:52:15.952450 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:52:15.956964 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:52:15.956999 systemd[1]: Reached target paths.target. Mar 17 18:52:15.961004 systemd[1]: Reached target timers.target. Mar 17 18:52:15.965951 systemd[1]: Listening on dbus.socket. Mar 17 18:52:15.970860 systemd[1]: Starting docker.socket... Mar 17 18:52:16.005108 systemd[1]: Listening on sshd.socket. Mar 17 18:52:16.009197 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:52:16.009682 systemd[1]: Listening on docker.socket. Mar 17 18:52:16.014032 systemd[1]: Reached target sockets.target. Mar 17 18:52:16.018395 systemd[1]: Reached target basic.target. Mar 17 18:52:16.022559 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:52:16.022589 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:52:16.023784 systemd[1]: Starting containerd.service... Mar 17 18:52:16.028450 systemd[1]: Starting dbus.service... Mar 17 18:52:16.032662 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:52:16.037729 systemd[1]: Starting extend-filesystems.service... Mar 17 18:52:16.044563 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:52:16.045787 systemd[1]: Starting kubelet.service... Mar 17 18:52:16.050062 systemd[1]: Starting motdgen.service... Mar 17 18:52:16.054298 systemd[1]: Started nvidia.service. Mar 17 18:52:16.059502 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:52:16.065309 systemd[1]: Starting sshd-keygen.service... Mar 17 18:52:16.071218 systemd[1]: Starting systemd-logind.service... Mar 17 18:52:16.076214 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:52:16.076279 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:52:16.076702 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 18:52:16.077402 systemd[1]: Starting update-engine.service... Mar 17 18:52:16.082419 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:52:16.091839 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:52:16.092022 systemd[1]: Finished ssh-key-proc-cmdline.service. 
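Before the heavier services come up, systemd is already listening on dbus.socket, docker.socket and sshd.socket, so early clients are queued until the matching service starts (socket activation). A sketch that probes the two filesystem sockets involved, using their conventional paths; /run/docker.sock is the path the earlier ListenStream= warning refers to, and sshd.socket listens on TCP so it is skipped:

    import pathlib
    import stat

    # Conventional filesystem socket paths for dbus.socket and docker.socket.
    for path in ("/run/dbus/system_bus_socket", "/run/docker.sock"):
        p = pathlib.Path(path)
        try:
            is_sock = stat.S_ISSOCK(p.stat().st_mode)
        except FileNotFoundError:
            is_sock = False
        print(f"{path}: {'socket present' if is_sock else 'not present'}")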
Mar 17 18:52:16.108775 jq[1428]: false Mar 17 18:52:16.109566 jq[1444]: true Mar 17 18:52:16.121010 extend-filesystems[1429]: Found loop1 Mar 17 18:52:16.121010 extend-filesystems[1429]: Found sda Mar 17 18:52:16.140939 extend-filesystems[1429]: Found sda1 Mar 17 18:52:16.140939 extend-filesystems[1429]: Found sda2 Mar 17 18:52:16.140939 extend-filesystems[1429]: Found sda3 Mar 17 18:52:16.140939 extend-filesystems[1429]: Found usr Mar 17 18:52:16.140939 extend-filesystems[1429]: Found sda4 Mar 17 18:52:16.140939 extend-filesystems[1429]: Found sda6 Mar 17 18:52:16.140939 extend-filesystems[1429]: Found sda7 Mar 17 18:52:16.140939 extend-filesystems[1429]: Found sda9 Mar 17 18:52:16.140939 extend-filesystems[1429]: Checking size of /dev/sda9 Mar 17 18:52:16.127564 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:52:16.127739 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:52:16.138065 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:52:16.207199 jq[1454]: true Mar 17 18:52:16.138255 systemd[1]: Finished motdgen.service. Mar 17 18:52:16.149022 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Mar 17 18:52:16.149369 systemd-logind[1438]: New seat seat0. Mar 17 18:52:16.210016 env[1450]: time="2025-03-17T18:52:16.208048200Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:52:16.215525 extend-filesystems[1429]: Old size kept for /dev/sda9 Mar 17 18:52:16.215525 extend-filesystems[1429]: Found sr0 Mar 17 18:52:16.220568 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:52:16.220771 systemd[1]: Finished extend-filesystems.service. Mar 17 18:52:16.251853 env[1450]: time="2025-03-17T18:52:16.251802400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:52:16.252217 env[1450]: time="2025-03-17T18:52:16.252197960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:52:16.253995 env[1450]: time="2025-03-17T18:52:16.253963200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:52:16.254079 env[1450]: time="2025-03-17T18:52:16.254064960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:52:16.254390 env[1450]: time="2025-03-17T18:52:16.254365040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:52:16.254463 env[1450]: time="2025-03-17T18:52:16.254448640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:52:16.254520 env[1450]: time="2025-03-17T18:52:16.254506480Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:52:16.254585 env[1450]: time="2025-03-17T18:52:16.254572320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 17 18:52:16.254767 env[1450]: time="2025-03-17T18:52:16.254734560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:52:16.255059 env[1450]: time="2025-03-17T18:52:16.255038960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:52:16.255252 env[1450]: time="2025-03-17T18:52:16.255231840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:52:16.255316 env[1450]: time="2025-03-17T18:52:16.255302320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:52:16.255434 env[1450]: time="2025-03-17T18:52:16.255416360Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:52:16.255504 env[1450]: time="2025-03-17T18:52:16.255491360Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:52:16.268679 env[1450]: time="2025-03-17T18:52:16.268379760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:52:16.268679 env[1450]: time="2025-03-17T18:52:16.268440880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:52:16.268679 env[1450]: time="2025-03-17T18:52:16.268454640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:52:16.268679 env[1450]: time="2025-03-17T18:52:16.268488080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:52:16.268679 env[1450]: time="2025-03-17T18:52:16.268503120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:52:16.268679 env[1450]: time="2025-03-17T18:52:16.268517720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:52:16.268679 env[1450]: time="2025-03-17T18:52:16.268530200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:52:16.269846 env[1450]: time="2025-03-17T18:52:16.269173040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:52:16.269846 env[1450]: time="2025-03-17T18:52:16.269202040Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:52:16.269846 env[1450]: time="2025-03-17T18:52:16.269217200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:52:16.269846 env[1450]: time="2025-03-17T18:52:16.269231920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:52:16.269846 env[1450]: time="2025-03-17T18:52:16.269244440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:52:16.269846 env[1450]: time="2025-03-17T18:52:16.269368960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 17 18:52:16.269846 env[1450]: time="2025-03-17T18:52:16.269443080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:52:16.269846 env[1450]: time="2025-03-17T18:52:16.269661920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:52:16.269846 env[1450]: time="2025-03-17T18:52:16.269710280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.269846 env[1450]: time="2025-03-17T18:52:16.269728240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270142480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270165920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270178360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270247200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270271720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270284960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270298600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270310400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270324320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270478960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270497800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270511240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270530000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:52:16.271742 env[1450]: time="2025-03-17T18:52:16.270546320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:52:16.272062 env[1450]: time="2025-03-17T18:52:16.270558520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 17 18:52:16.272062 env[1450]: time="2025-03-17T18:52:16.270576600Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:52:16.272062 env[1450]: time="2025-03-17T18:52:16.270610760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:52:16.272119 env[1450]: time="2025-03-17T18:52:16.270849160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:52:16.272119 env[1450]: time="2025-03-17T18:52:16.270905000Z" level=info msg="Connect containerd service" Mar 17 18:52:16.272119 env[1450]: time="2025-03-17T18:52:16.270933000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:52:16.272119 env[1450]: time="2025-03-17T18:52:16.271540920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:52:16.285592 env[1450]: time="2025-03-17T18:52:16.272341760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:52:16.285592 env[1450]: time="2025-03-17T18:52:16.272387600Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 17 18:52:16.285592 env[1450]: time="2025-03-17T18:52:16.277715760Z" level=info msg="Start subscribing containerd event" Mar 17 18:52:16.285592 env[1450]: time="2025-03-17T18:52:16.277775320Z" level=info msg="Start recovering state" Mar 17 18:52:16.285592 env[1450]: time="2025-03-17T18:52:16.277842680Z" level=info msg="Start event monitor" Mar 17 18:52:16.285592 env[1450]: time="2025-03-17T18:52:16.277859240Z" level=info msg="Start snapshots syncer" Mar 17 18:52:16.285592 env[1450]: time="2025-03-17T18:52:16.277869200Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:52:16.285592 env[1450]: time="2025-03-17T18:52:16.277876480Z" level=info msg="Start streaming server" Mar 17 18:52:16.285592 env[1450]: time="2025-03-17T18:52:16.279698200Z" level=info msg="containerd successfully booted in 0.073503s" Mar 17 18:52:16.272498 systemd[1]: Started containerd.service. Mar 17 18:52:16.366696 systemd[1]: nvidia.service: Deactivated successfully. Mar 17 18:52:16.367190 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:52:16.367782 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:52:16.472026 dbus-daemon[1427]: [system] SELinux support is enabled Mar 17 18:52:16.472205 systemd[1]: Started dbus.service. Mar 17 18:52:16.477651 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:52:16.477677 systemd[1]: Reached target system-config.target. Mar 17 18:52:16.478820 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 18:52:16.486845 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:52:16.486868 systemd[1]: Reached target user-config.target. Mar 17 18:52:16.491393 systemd[1]: Started systemd-logind.service. Mar 17 18:52:16.748981 update_engine[1442]: I0317 18:52:16.735070 1442 main.cc:92] Flatcar Update Engine starting Mar 17 18:52:16.812280 systemd[1]: Started update-engine.service. Mar 17 18:52:16.812666 update_engine[1442]: I0317 18:52:16.812559 1442 update_check_scheduler.cc:74] Next update check in 9m16s Mar 17 18:52:16.819403 systemd[1]: Started locksmithd.service. Mar 17 18:52:16.954897 systemd[1]: Started kubelet.service. Mar 17 18:52:17.398657 kubelet[1531]: E0317 18:52:17.398611 1531 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:52:17.400035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:52:17.400151 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:52:17.953825 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:52:17.990028 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:52:18.007930 systemd[1]: Finished sshd-keygen.service. Mar 17 18:52:18.014000 systemd[1]: Starting issuegen.service... Mar 17 18:52:18.019903 systemd[1]: Started waagent.service. Mar 17 18:52:18.024570 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:52:18.024771 systemd[1]: Finished issuegen.service. 
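kubelet.service starts and immediately exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a first boot like this one, that file is presumably written by a later provisioning step that this log does not show. A trivial pre-flight check that mirrors the failure mode, using the config path reported in the error above:

    import pathlib
    import sys

    # The path kubelet complained about in the error message above.
    CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")
    if not CONFIG.is_file():
        sys.exit(f"{CONFIG} not found; write a KubeletConfiguration there "
                 "before starting kubelet.service")
    print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes)")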
Mar 17 18:52:18.030450 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:52:18.051887 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:52:18.058439 systemd[1]: Started getty@tty1.service. Mar 17 18:52:18.064291 systemd[1]: Started serial-getty@ttyAMA0.service. Mar 17 18:52:18.069149 systemd[1]: Reached target getty.target. Mar 17 18:52:18.073323 systemd[1]: Reached target multi-user.target. Mar 17 18:52:18.079247 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:52:18.090505 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:52:18.090692 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:52:18.096267 systemd[1]: Startup finished in 721ms (kernel) + 11.697s (initrd) + 20.159s (userspace) = 32.577s. Mar 17 18:52:18.641391 login[1555]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Mar 17 18:52:18.643207 login[1556]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:52:18.695330 systemd[1]: Created slice user-500.slice. Mar 17 18:52:18.696463 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:52:18.699879 systemd-logind[1438]: New session 2 of user core. Mar 17 18:52:18.720132 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:52:18.721612 systemd[1]: Starting user@500.service... Mar 17 18:52:18.740660 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:52:18.944619 systemd[1559]: Queued start job for default target default.target. Mar 17 18:52:18.945142 systemd[1559]: Reached target paths.target. Mar 17 18:52:18.945161 systemd[1559]: Reached target sockets.target. Mar 17 18:52:18.945173 systemd[1559]: Reached target timers.target. Mar 17 18:52:18.945182 systemd[1559]: Reached target basic.target. Mar 17 18:52:18.945226 systemd[1559]: Reached target default.target. Mar 17 18:52:18.945249 systemd[1559]: Startup finished in 198ms. Mar 17 18:52:18.945295 systemd[1]: Started user@500.service. Mar 17 18:52:18.946205 systemd[1]: Started session-2.scope. Mar 17 18:52:19.642070 login[1555]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:52:19.646485 systemd[1]: Started session-1.scope. Mar 17 18:52:19.647605 systemd-logind[1438]: New session 1 of user core. Mar 17 18:52:23.680034 waagent[1552]: 2025-03-17T18:52:23.679922Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Mar 17 18:52:23.716011 waagent[1552]: 2025-03-17T18:52:23.715914Z INFO Daemon Daemon OS: flatcar 3510.3.7 Mar 17 18:52:23.720789 waagent[1552]: 2025-03-17T18:52:23.720694Z INFO Daemon Daemon Python: 3.9.16 Mar 17 18:52:23.733937 waagent[1552]: 2025-03-17T18:52:23.733839Z INFO Daemon Daemon Run daemon Mar 17 18:52:23.738504 waagent[1552]: 2025-03-17T18:52:23.738431Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' Mar 17 18:52:23.755409 waagent[1552]: 2025-03-17T18:52:23.755267Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
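The boot summary splits startup into 721 ms of kernel time, 11.697 s in the initrd and 20.159 s in userspace, for 32.577 s total. A small parser over that line, mostly to show that the stages really do sum to the reported total:

    import re

    line = ("Startup finished in 721ms (kernel) + 11.697s (initrd) "
            "+ 20.159s (userspace) = 32.577s.")

    def seconds(token):
        # '721ms' -> 0.721, '11.697s' -> 11.697
        return float(token[:-2]) / 1000 if token.endswith("ms") else float(token[:-1])

    stages = re.findall(r"([\d.]+m?s) \((\w+)\)", line)
    total = sum(seconds(value) for value, _ in stages)
    print({name: seconds(value) for value, name in stages}, f"total={total:.3f}s")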
Mar 17 18:52:23.770719 waagent[1552]: 2025-03-17T18:52:23.770586Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 18:52:23.780818 waagent[1552]: 2025-03-17T18:52:23.780699Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 18:52:23.786125 waagent[1552]: 2025-03-17T18:52:23.786041Z INFO Daemon Daemon Using waagent for provisioning Mar 17 18:52:23.792359 waagent[1552]: 2025-03-17T18:52:23.792280Z INFO Daemon Daemon Activate resource disk Mar 17 18:52:23.797491 waagent[1552]: 2025-03-17T18:52:23.797419Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 17 18:52:23.811845 waagent[1552]: 2025-03-17T18:52:23.811759Z INFO Daemon Daemon Found device: None Mar 17 18:52:23.816866 waagent[1552]: 2025-03-17T18:52:23.816787Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 17 18:52:23.825437 waagent[1552]: 2025-03-17T18:52:23.825346Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 17 18:52:23.837911 waagent[1552]: 2025-03-17T18:52:23.837838Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 18:52:23.843970 waagent[1552]: 2025-03-17T18:52:23.843891Z INFO Daemon Daemon Running default provisioning handler Mar 17 18:52:23.857392 waagent[1552]: 2025-03-17T18:52:23.857257Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Mar 17 18:52:23.873427 waagent[1552]: 2025-03-17T18:52:23.873283Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 18:52:23.883449 waagent[1552]: 2025-03-17T18:52:23.883356Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 18:52:23.888675 waagent[1552]: 2025-03-17T18:52:23.888589Z INFO Daemon Daemon Copying ovf-env.xml Mar 17 18:52:23.973023 waagent[1552]: 2025-03-17T18:52:23.972832Z INFO Daemon Daemon Successfully mounted dvd Mar 17 18:52:24.127459 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 17 18:52:24.184497 waagent[1552]: 2025-03-17T18:52:24.184331Z INFO Daemon Daemon Detect protocol endpoint Mar 17 18:52:24.189881 waagent[1552]: 2025-03-17T18:52:24.189790Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 18:52:24.196590 waagent[1552]: 2025-03-17T18:52:24.196507Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 17 18:52:24.203306 waagent[1552]: 2025-03-17T18:52:24.203226Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 17 18:52:24.208817 waagent[1552]: 2025-03-17T18:52:24.208731Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 17 18:52:24.214127 waagent[1552]: 2025-03-17T18:52:24.214058Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 17 18:52:24.316516 waagent[1552]: 2025-03-17T18:52:24.316400Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 17 18:52:24.324043 waagent[1552]: 2025-03-17T18:52:24.323991Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 17 18:52:24.329635 waagent[1552]: 2025-03-17T18:52:24.329544Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 17 18:52:24.725452 waagent[1552]: 2025-03-17T18:52:24.725252Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 17 18:52:24.741273 waagent[1552]: 2025-03-17T18:52:24.741188Z INFO Daemon Daemon Forcing an update of the goal state.. Mar 17 18:52:24.747430 waagent[1552]: 2025-03-17T18:52:24.747352Z INFO Daemon Daemon Fetching goal state [incarnation 1] Mar 17 18:52:24.837797 waagent[1552]: 2025-03-17T18:52:24.837650Z INFO Daemon Daemon Found private key matching thumbprint 43BFE01D6F67BBDFA6E5BA3A44BA085A2500D5E6 Mar 17 18:52:24.846369 waagent[1552]: 2025-03-17T18:52:24.846285Z INFO Daemon Daemon Certificate with thumbprint 8F8B41FD1746D3A8C4057AEF131B50485E71CE21 has no matching private key. Mar 17 18:52:24.856252 waagent[1552]: 2025-03-17T18:52:24.856173Z INFO Daemon Daemon Fetch goal state completed Mar 17 18:52:24.911231 waagent[1552]: 2025-03-17T18:52:24.911169Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: a50ac2b0-e7df-4a13-b904-c02836b75074 New eTag: 14258187823354120750] Mar 17 18:52:24.923618 waagent[1552]: 2025-03-17T18:52:24.923525Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Mar 17 18:52:24.939529 waagent[1552]: 2025-03-17T18:52:24.939457Z INFO Daemon Daemon Starting provisioning Mar 17 18:52:24.944911 waagent[1552]: 2025-03-17T18:52:24.944799Z INFO Daemon Daemon Handle ovf-env.xml. Mar 17 18:52:24.950159 waagent[1552]: 2025-03-17T18:52:24.950056Z INFO Daemon Daemon Set hostname [ci-3510.3.7-a-95dfbd75e4] Mar 17 18:52:24.997413 waagent[1552]: 2025-03-17T18:52:24.997275Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-a-95dfbd75e4] Mar 17 18:52:25.004416 waagent[1552]: 2025-03-17T18:52:25.004292Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 17 18:52:25.012082 waagent[1552]: 2025-03-17T18:52:25.011968Z INFO Daemon Daemon Primary interface is [eth0] Mar 17 18:52:25.032812 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Mar 17 18:52:25.033053 systemd[1]: Stopped systemd-networkd-wait-online.service. Mar 17 18:52:25.033132 systemd[1]: Stopping systemd-networkd-wait-online.service... Mar 17 18:52:25.033515 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:52:25.037813 systemd-networkd[1223]: eth0: DHCPv6 lease lost Mar 17 18:52:25.038265 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Mar 17 18:52:25.039447 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:52:25.039667 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:52:25.042617 systemd[1]: Starting systemd-networkd.service... 
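
The "Test for route to 168.63.129.16" / "Wire server endpoint:168.63.129.16" exchange above amounts to checking that the fixed Azure wire server address is reachable over plain HTTP. A minimal probe in the same spirit; this is a sketch, not the agent's actual request, although the ?comp=versions endpoint is the same one coreos-metadata fetches later in this log:

# Sketch: probe the Azure wire server the way the daemon's route test implies.
import urllib.request

WIRESERVER = "168.63.129.16"  # fixed wire server address, as logged above
with urllib.request.urlopen(f"http://{WIRESERVER}/?comp=versions", timeout=5) as resp:
    # an HTTP 200 here is what "Route to 168.63.129.16 exists" ultimately means
    print(resp.status, resp.read(200))
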
Mar 17 18:52:25.074311 systemd-networkd[1603]: enP9617s1: Link UP Mar 17 18:52:25.074323 systemd-networkd[1603]: enP9617s1: Gained carrier Mar 17 18:52:25.075201 systemd-networkd[1603]: eth0: Link UP Mar 17 18:52:25.075212 systemd-networkd[1603]: eth0: Gained carrier Mar 17 18:52:25.075515 systemd-networkd[1603]: lo: Link UP Mar 17 18:52:25.075524 systemd-networkd[1603]: lo: Gained carrier Mar 17 18:52:25.075822 systemd-networkd[1603]: eth0: Gained IPv6LL Mar 17 18:52:25.076031 systemd-networkd[1603]: Enumeration completed Mar 17 18:52:25.076137 systemd[1]: Started systemd-networkd.service. Mar 17 18:52:25.077140 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Mar 17 18:52:25.077948 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:52:25.079819 systemd-networkd[1603]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:52:25.080852 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Mar 17 18:52:25.084723 waagent[1552]: 2025-03-17T18:52:25.084568Z INFO Daemon Daemon Create user account if not exists Mar 17 18:52:25.090308 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Mar 17 18:52:25.090787 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Mar 17 18:52:25.092124 waagent[1552]: 2025-03-17T18:52:25.092031Z INFO Daemon Daemon User core already exists, skip useradd Mar 17 18:52:25.097865 waagent[1552]: 2025-03-17T18:52:25.097776Z INFO Daemon Daemon Configure sudoer Mar 17 18:52:25.103061 waagent[1552]: 2025-03-17T18:52:25.102988Z INFO Daemon Daemon Configure sshd Mar 17 18:52:25.107684 waagent[1552]: 2025-03-17T18:52:25.107619Z INFO Daemon Daemon Deploy ssh public key. Mar 17 18:52:25.107870 systemd-networkd[1603]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 18:52:25.112489 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Mar 17 18:52:25.112729 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:52:26.303347 waagent[1552]: 2025-03-17T18:52:26.303275Z INFO Daemon Daemon Provisioning complete Mar 17 18:52:26.321379 waagent[1552]: 2025-03-17T18:52:26.321312Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 17 18:52:26.327862 waagent[1552]: 2025-03-17T18:52:26.327783Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 17 18:52:26.338867 waagent[1552]: 2025-03-17T18:52:26.338785Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Mar 17 18:52:26.651654 waagent[1612]: 2025-03-17T18:52:26.651493Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Mar 17 18:52:26.659103 waagent[1612]: 2025-03-17T18:52:26.659025Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:52:26.659257 waagent[1612]: 2025-03-17T18:52:26.659210Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:52:26.672031 waagent[1612]: 2025-03-17T18:52:26.671953Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Mar 17 18:52:26.672211 waagent[1612]: 2025-03-17T18:52:26.672162Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Mar 17 18:52:26.743679 waagent[1612]: 2025-03-17T18:52:26.743532Z INFO ExtHandler ExtHandler Found private key matching thumbprint 43BFE01D6F67BBDFA6E5BA3A44BA085A2500D5E6 Mar 17 18:52:26.743930 waagent[1612]: 2025-03-17T18:52:26.743874Z INFO ExtHandler ExtHandler Certificate with thumbprint 8F8B41FD1746D3A8C4057AEF131B50485E71CE21 has no matching private key. Mar 17 18:52:26.744154 waagent[1612]: 2025-03-17T18:52:26.744106Z INFO ExtHandler ExtHandler Fetch goal state completed Mar 17 18:52:26.758447 waagent[1612]: 2025-03-17T18:52:26.758391Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 708651b9-b7c4-4469-939d-65da45c8e619 New eTag: 14258187823354120750] Mar 17 18:52:26.759033 waagent[1612]: 2025-03-17T18:52:26.758972Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Mar 17 18:52:26.810397 waagent[1612]: 2025-03-17T18:52:26.810252Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 17 18:52:26.820498 waagent[1612]: 2025-03-17T18:52:26.820414Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1612 Mar 17 18:52:26.824293 waagent[1612]: 2025-03-17T18:52:26.824223Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] Mar 17 18:52:26.825629 waagent[1612]: 2025-03-17T18:52:26.825572Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 17 18:52:26.948068 waagent[1612]: 2025-03-17T18:52:26.947940Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 17 18:52:26.948444 waagent[1612]: 2025-03-17T18:52:26.948384Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 17 18:52:26.956278 waagent[1612]: 2025-03-17T18:52:26.956213Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 17 18:52:26.956821 waagent[1612]: 2025-03-17T18:52:26.956732Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Mar 17 18:52:26.958062 waagent[1612]: 2025-03-17T18:52:26.957992Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Mar 17 18:52:26.959469 waagent[1612]: 2025-03-17T18:52:26.959397Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 17 18:52:26.960123 waagent[1612]: 2025-03-17T18:52:26.960060Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:52:26.960392 waagent[1612]: 2025-03-17T18:52:26.960343Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:52:26.961062 waagent[1612]: 2025-03-17T18:52:26.961004Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 17 18:52:26.961458 waagent[1612]: 2025-03-17T18:52:26.961402Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 18:52:26.961458 waagent[1612]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 18:52:26.961458 waagent[1612]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 18:52:26.961458 waagent[1612]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 18:52:26.961458 waagent[1612]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:52:26.961458 waagent[1612]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:52:26.961458 waagent[1612]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:52:26.963894 waagent[1612]: 2025-03-17T18:52:26.963702Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 17 18:52:26.965053 waagent[1612]: 2025-03-17T18:52:26.964986Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:52:26.965345 waagent[1612]: 2025-03-17T18:52:26.965293Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:52:26.966041 waagent[1612]: 2025-03-17T18:52:26.965974Z INFO EnvHandler ExtHandler Configure routes Mar 17 18:52:26.966232 waagent[1612]: 2025-03-17T18:52:26.966182Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 17 18:52:26.966344 waagent[1612]: 2025-03-17T18:52:26.966101Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 18:52:26.966636 waagent[1612]: 2025-03-17T18:52:26.966571Z INFO EnvHandler ExtHandler Gateway:None Mar 17 18:52:26.967243 waagent[1612]: 2025-03-17T18:52:26.967149Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 18:52:26.967334 waagent[1612]: 2025-03-17T18:52:26.967276Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 17 18:52:26.967764 waagent[1612]: 2025-03-17T18:52:26.967684Z INFO EnvHandler ExtHandler Routes:None Mar 17 18:52:26.970154 waagent[1612]: 2025-03-17T18:52:26.970097Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 18:52:26.978121 waagent[1612]: 2025-03-17T18:52:26.978043Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Mar 17 18:52:26.979620 waagent[1612]: 2025-03-17T18:52:26.979552Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Mar 17 18:52:26.983026 waagent[1612]: 2025-03-17T18:52:26.982958Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Mar 17 18:52:27.029598 waagent[1612]: 2025-03-17T18:52:27.029467Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1603' Mar 17 18:52:27.040522 waagent[1612]: 2025-03-17T18:52:27.040420Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
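
The EnvHandler error above, "invalid literal for int() with base 10: 'MainPID=1603'", is a plain parsing failure: the agent received the raw systemd property string where it expected only the PID digits (that the string comes from something like `systemctl show -p MainPID` is an assumption). Reproducing and correcting the parse:

raw = "MainPID=1603"  # the exact value quoted in the error above
try:
    pid = int(raw)  # reproduces: ValueError: invalid literal for int() with base 10
except ValueError:
    pid = int(raw.partition("=")[2])  # keep only the digits after '='
print(pid)  # 1603
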
Mar 17 18:52:27.105404 waagent[1612]: 2025-03-17T18:52:27.105276Z INFO MonitorHandler ExtHandler Network interfaces: Mar 17 18:52:27.105404 waagent[1612]: Executing ['ip', '-a', '-o', 'link']: Mar 17 18:52:27.105404 waagent[1612]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 17 18:52:27.105404 waagent[1612]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:d5:17 brd ff:ff:ff:ff:ff:ff Mar 17 18:52:27.105404 waagent[1612]: 3: enP9617s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:d5:17 brd ff:ff:ff:ff:ff:ff\ altname enP9617p0s2 Mar 17 18:52:27.105404 waagent[1612]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 17 18:52:27.105404 waagent[1612]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 17 18:52:27.105404 waagent[1612]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 17 18:52:27.105404 waagent[1612]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 17 18:52:27.105404 waagent[1612]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Mar 17 18:52:27.105404 waagent[1612]: 2: eth0 inet6 fe80::20d:3aff:fe06:d517/64 scope link \ valid_lft forever preferred_lft forever Mar 17 18:52:27.272328 waagent[1612]: 2025-03-17T18:52:27.272220Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Mar 17 18:52:27.342430 waagent[1552]: 2025-03-17T18:52:27.342302Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Mar 17 18:52:27.347069 waagent[1552]: 2025-03-17T18:52:27.347014Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Mar 17 18:52:27.596413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:52:27.596537 systemd[1]: Stopped kubelet.service. Mar 17 18:52:27.597923 systemd[1]: Starting kubelet.service... Mar 17 18:52:27.684411 systemd[1]: Started kubelet.service. Mar 17 18:52:27.759404 kubelet[1647]: E0317 18:52:27.759365 1647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:52:27.762081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:52:27.762202 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 18:52:28.640206 waagent[1640]: 2025-03-17T18:52:28.640109Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Mar 17 18:52:28.640918 waagent[1640]: 2025-03-17T18:52:28.640853Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7 Mar 17 18:52:28.641049 waagent[1640]: 2025-03-17T18:52:28.641003Z INFO ExtHandler ExtHandler Python: 3.9.16 Mar 17 18:52:28.641169 waagent[1640]: 2025-03-17T18:52:28.641129Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 17 18:52:28.649405 waagent[1640]: 2025-03-17T18:52:28.649295Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 17 18:52:28.649817 waagent[1640]: 2025-03-17T18:52:28.649731Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:52:28.649978 waagent[1640]: 2025-03-17T18:52:28.649931Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:52:28.663169 waagent[1640]: 2025-03-17T18:52:28.663102Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 18:52:28.672038 waagent[1640]: 2025-03-17T18:52:28.671984Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 17 18:52:28.673049 waagent[1640]: 2025-03-17T18:52:28.672991Z INFO ExtHandler Mar 17 18:52:28.673195 waagent[1640]: 2025-03-17T18:52:28.673149Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 131de00b-cee1-45f6-a97a-39f5069dc5ec eTag: 14258187823354120750 source: Fabric] Mar 17 18:52:28.673950 waagent[1640]: 2025-03-17T18:52:28.673893Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 17 18:52:28.675187 waagent[1640]: 2025-03-17T18:52:28.675126Z INFO ExtHandler Mar 17 18:52:28.675319 waagent[1640]: 2025-03-17T18:52:28.675274Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 17 18:52:28.682430 waagent[1640]: 2025-03-17T18:52:28.682380Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 18:52:28.682925 waagent[1640]: 2025-03-17T18:52:28.682873Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Mar 17 18:52:28.707370 waagent[1640]: 2025-03-17T18:52:28.707313Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Mar 17 18:52:28.778725 waagent[1640]: 2025-03-17T18:52:28.778577Z INFO ExtHandler Downloaded certificate {'thumbprint': '8F8B41FD1746D3A8C4057AEF131B50485E71CE21', 'hasPrivateKey': False} Mar 17 18:52:28.779834 waagent[1640]: 2025-03-17T18:52:28.779771Z INFO ExtHandler Downloaded certificate {'thumbprint': '43BFE01D6F67BBDFA6E5BA3A44BA085A2500D5E6', 'hasPrivateKey': True} Mar 17 18:52:28.780894 waagent[1640]: 2025-03-17T18:52:28.780834Z INFO ExtHandler Fetch goal state completed Mar 17 18:52:28.801818 waagent[1640]: 2025-03-17T18:52:28.801694Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Mar 17 18:52:28.814031 waagent[1640]: 2025-03-17T18:52:28.813932Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1640 Mar 17 18:52:28.817351 waagent[1640]: 2025-03-17T18:52:28.817287Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] Mar 17 18:52:28.818458 waagent[1640]: 2025-03-17T18:52:28.818399Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 17 18:52:28.818782 waagent[1640]: 2025-03-17T18:52:28.818712Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 17 18:52:28.820887 waagent[1640]: 2025-03-17T18:52:28.820830Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 17 18:52:28.826128 waagent[1640]: 2025-03-17T18:52:28.826067Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 17 18:52:28.826520 waagent[1640]: 2025-03-17T18:52:28.826461Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 17 18:52:28.834428 waagent[1640]: 2025-03-17T18:52:28.834363Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 17 18:52:28.834971 waagent[1640]: 2025-03-17T18:52:28.834909Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Mar 17 18:52:28.841404 waagent[1640]: 2025-03-17T18:52:28.841286Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 17 18:52:28.842533 waagent[1640]: 2025-03-17T18:52:28.842464Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 17 18:52:28.844165 waagent[1640]: 2025-03-17T18:52:28.844093Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 17 18:52:28.845087 waagent[1640]: 2025-03-17T18:52:28.845023Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:52:28.845370 waagent[1640]: 2025-03-17T18:52:28.845319Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:52:28.846191 waagent[1640]: 2025-03-17T18:52:28.846118Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 17 18:52:28.846900 waagent[1640]: 2025-03-17T18:52:28.846824Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Mar 17 18:52:28.847299 waagent[1640]: 2025-03-17T18:52:28.847234Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:52:28.847512 waagent[1640]: 2025-03-17T18:52:28.847434Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 18:52:28.847830 waagent[1640]: 2025-03-17T18:52:28.847732Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 17 18:52:28.848091 waagent[1640]: 2025-03-17T18:52:28.848024Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 18:52:28.848091 waagent[1640]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 18:52:28.848091 waagent[1640]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 18:52:28.848091 waagent[1640]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 18:52:28.848091 waagent[1640]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:52:28.848091 waagent[1640]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:52:28.848091 waagent[1640]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:52:28.848253 waagent[1640]: 2025-03-17T18:52:28.848124Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:52:28.848602 waagent[1640]: 2025-03-17T18:52:28.848528Z INFO EnvHandler ExtHandler Configure routes Mar 17 18:52:28.849235 waagent[1640]: 2025-03-17T18:52:28.849155Z INFO EnvHandler ExtHandler Gateway:None Mar 17 18:52:28.849827 waagent[1640]: 2025-03-17T18:52:28.849652Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 18:52:28.849892 waagent[1640]: 2025-03-17T18:52:28.849838Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
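
The routing table dumps above (here and in the earlier MonitorHandler output) come straight from /proc/net/route, where destination and gateway addresses are little-endian hexadecimal. Decoding them confirms the routes are exactly what the DHCPv4 lease earlier in this log set up:

# Decode the little-endian hex addresses shown in the /proc/net/route dumps.
import socket
import struct

def decode(hexaddr):
    # /proc/net/route stores IPv4 addresses as little-endian hexadecimal
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

for h in ("0114C80A", "0014C80A", "10813FA8", "FEA9FEA9"):
    print(h, "->", decode(h))
# 0114C80A -> 10.200.20.1     (default gateway from the DHCPv4 lease)
# 0014C80A -> 10.200.20.0     (the local /24)
# 10813FA8 -> 168.63.129.16   (Azure wire server)
# FEA9FEA9 -> 169.254.169.254 (instance metadata service)
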
Mar 17 18:52:28.852452 waagent[1640]: 2025-03-17T18:52:28.852315Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 18:52:28.852766 waagent[1640]: 2025-03-17T18:52:28.852682Z INFO EnvHandler ExtHandler Routes:None Mar 17 18:52:28.872320 waagent[1640]: 2025-03-17T18:52:28.872172Z INFO ExtHandler ExtHandler Downloading agent manifest Mar 17 18:52:28.890483 waagent[1640]: 2025-03-17T18:52:28.890354Z INFO MonitorHandler ExtHandler Network interfaces: Mar 17 18:52:28.890483 waagent[1640]: Executing ['ip', '-a', '-o', 'link']: Mar 17 18:52:28.890483 waagent[1640]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 17 18:52:28.890483 waagent[1640]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:d5:17 brd ff:ff:ff:ff:ff:ff Mar 17 18:52:28.890483 waagent[1640]: 3: enP9617s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:d5:17 brd ff:ff:ff:ff:ff:ff\ altname enP9617p0s2 Mar 17 18:52:28.890483 waagent[1640]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 17 18:52:28.890483 waagent[1640]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 17 18:52:28.890483 waagent[1640]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 17 18:52:28.890483 waagent[1640]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 17 18:52:28.890483 waagent[1640]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Mar 17 18:52:28.890483 waagent[1640]: 2: eth0 inet6 fe80::20d:3aff:fe06:d517/64 scope link \ valid_lft forever preferred_lft forever Mar 17 18:52:28.894453 waagent[1640]: 2025-03-17T18:52:28.894270Z INFO ExtHandler ExtHandler Mar 17 18:52:28.895168 waagent[1640]: 2025-03-17T18:52:28.895088Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3de99538-c197-4bcf-88f6-bafbfe15b6db correlation 21626d20-f279-413a-bb6b-a2b991b3b7ce created: 2025-03-17T18:51:02.817230Z] Mar 17 18:52:28.899043 waagent[1640]: 2025-03-17T18:52:28.898966Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 17 18:52:28.905448 waagent[1640]: 2025-03-17T18:52:28.905371Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 11 ms] Mar 17 18:52:28.928568 waagent[1640]: 2025-03-17T18:52:28.928495Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Mar 17 18:52:28.954189 waagent[1640]: 2025-03-17T18:52:28.954112Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C98CE73F-51E4-44C0-B3A3-729BAC6303D9;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Mar 17 18:52:29.131565 waagent[1640]: 2025-03-17T18:52:29.131444Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 17 18:52:29.131565 waagent[1640]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:52:29.131565 waagent[1640]: pkts bytes target prot opt in out source destination Mar 17 18:52:29.131565 waagent[1640]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:52:29.131565 waagent[1640]: pkts bytes target prot opt in out source destination Mar 17 18:52:29.131565 waagent[1640]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:52:29.131565 waagent[1640]: pkts bytes target prot opt in out source destination Mar 17 18:52:29.131565 waagent[1640]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 18:52:29.131565 waagent[1640]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 18:52:29.131565 waagent[1640]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 18:52:29.139345 waagent[1640]: 2025-03-17T18:52:29.139237Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 17 18:52:29.139345 waagent[1640]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:52:29.139345 waagent[1640]: pkts bytes target prot opt in out source destination Mar 17 18:52:29.139345 waagent[1640]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:52:29.139345 waagent[1640]: pkts bytes target prot opt in out source destination Mar 17 18:52:29.139345 waagent[1640]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:52:29.139345 waagent[1640]: pkts bytes target prot opt in out source destination Mar 17 18:52:29.139345 waagent[1640]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 18:52:29.139345 waagent[1640]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 18:52:29.139345 waagent[1640]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 18:52:29.140158 waagent[1640]: 2025-03-17T18:52:29.140111Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 17 18:52:37.929766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:52:37.929947 systemd[1]: Stopped kubelet.service. Mar 17 18:52:37.931333 systemd[1]: Starting kubelet.service... Mar 17 18:52:38.157313 systemd[1]: Started kubelet.service. Mar 17 18:52:38.193201 kubelet[1707]: E0317 18:52:38.193030 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:52:38.195444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:52:38.195568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:52:48.429797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 18:52:48.429974 systemd[1]: Stopped kubelet.service. Mar 17 18:52:48.431327 systemd[1]: Starting kubelet.service... Mar 17 18:52:48.677374 systemd[1]: Started kubelet.service. 
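
The "Created firewall rules for the Azure Fabric" listing above boils down to three OUTPUT rules guarding the wire server address. Roughly equivalent iptables invocations, as a sketch only; the agent shells out to iptables itself, and the exact table and option spellings it uses are assumptions:

# Sketch: reproduce the three OUTPUT rules shown in the EnvHandler listing above.
import subprocess

WIRESERVER = "168.63.129.16"
rules = [
    # ACCEPT DNS to the wire server ("tcp dpt:53" in the listing)
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    # ACCEPT traffic from root ("owner UID match 0")
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # DROP new connections from anything else ("ctstate INVALID,NEW")
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-w", *rule], check=True)
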
Mar 17 18:52:48.710260 kubelet[1716]: E0317 18:52:48.710159 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:52:48.712281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:52:48.712402 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:52:55.337051 systemd-timesyncd[1402]: Contacted time server 23.168.136.132:123 (0.flatcar.pool.ntp.org). Mar 17 18:52:55.337108 systemd-timesyncd[1402]: Initial clock synchronization to Mon 2025-03-17 18:52:55.336491 UTC. Mar 17 18:52:56.701511 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 17 18:52:58.929826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 18:52:58.929998 systemd[1]: Stopped kubelet.service. Mar 17 18:52:58.931361 systemd[1]: Starting kubelet.service... Mar 17 18:52:59.132668 systemd[1]: Started kubelet.service. Mar 17 18:52:59.168804 kubelet[1725]: E0317 18:52:59.168728 1725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:52:59.171038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:52:59.171159 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:53:02.557451 update_engine[1442]: I0317 18:53:02.557076 1442 update_attempter.cc:509] Updating boot flags... Mar 17 18:53:09.179742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 17 18:53:09.179943 systemd[1]: Stopped kubelet.service. Mar 17 18:53:09.181396 systemd[1]: Starting kubelet.service... Mar 17 18:53:09.428540 systemd[1]: Started kubelet.service. Mar 17 18:53:09.475452 kubelet[1776]: E0317 18:53:09.475333 1776 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:53:09.477067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:53:09.477188 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:53:14.505235 systemd[1]: Created slice system-sshd.slice. Mar 17 18:53:14.506968 systemd[1]: Started sshd@0-10.200.20.24:22-10.200.16.10:51718.service. Mar 17 18:53:15.152182 sshd[1782]: Accepted publickey for core from 10.200.16.10 port 51718 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:15.178994 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:15.183579 systemd[1]: Started session-3.scope. Mar 17 18:53:15.184626 systemd-logind[1438]: New session 3 of user core. Mar 17 18:53:15.568312 systemd[1]: Started sshd@1-10.200.20.24:22-10.200.16.10:51732.service. 
Mar 17 18:53:16.015584 sshd[1787]: Accepted publickey for core from 10.200.16.10 port 51732 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:16.017211 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:16.021207 systemd-logind[1438]: New session 4 of user core. Mar 17 18:53:16.021737 systemd[1]: Started session-4.scope. Mar 17 18:53:16.340962 sshd[1787]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:16.343579 systemd[1]: sshd@1-10.200.20.24:22-10.200.16.10:51732.service: Deactivated successfully. Mar 17 18:53:16.344474 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:53:16.345074 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:53:16.346077 systemd-logind[1438]: Removed session 4. Mar 17 18:53:16.415205 systemd[1]: Started sshd@2-10.200.20.24:22-10.200.16.10:51746.service. Mar 17 18:53:16.863034 sshd[1793]: Accepted publickey for core from 10.200.16.10 port 51746 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:16.864915 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:16.869602 systemd[1]: Started session-5.scope. Mar 17 18:53:16.870224 systemd-logind[1438]: New session 5 of user core. Mar 17 18:53:17.184327 sshd[1793]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:17.187176 systemd[1]: sshd@2-10.200.20.24:22-10.200.16.10:51746.service: Deactivated successfully. Mar 17 18:53:17.187914 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:53:17.188461 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:53:17.189344 systemd-logind[1438]: Removed session 5. Mar 17 18:53:17.261445 systemd[1]: Started sshd@3-10.200.20.24:22-10.200.16.10:51758.service. Mar 17 18:53:17.722385 sshd[1799]: Accepted publickey for core from 10.200.16.10 port 51758 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:17.723827 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:17.727804 systemd-logind[1438]: New session 6 of user core. Mar 17 18:53:17.728455 systemd[1]: Started session-6.scope. Mar 17 18:53:18.051017 sshd[1799]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:18.053859 systemd[1]: sshd@3-10.200.20.24:22-10.200.16.10:51758.service: Deactivated successfully. Mar 17 18:53:18.054598 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:53:18.055210 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:53:18.056167 systemd-logind[1438]: Removed session 6. Mar 17 18:53:18.125568 systemd[1]: Started sshd@4-10.200.20.24:22-10.200.16.10:51774.service. Mar 17 18:53:18.574580 sshd[1805]: Accepted publickey for core from 10.200.16.10 port 51774 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:18.575923 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:18.579802 systemd-logind[1438]: New session 7 of user core. Mar 17 18:53:18.580287 systemd[1]: Started session-7.scope. Mar 17 18:53:19.019603 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:53:19.020353 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:53:19.035149 systemd[1]: Starting coreos-metadata.service... 
Mar 17 18:53:19.110302 coreos-metadata[1812]: Mar 17 18:53:19.110 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 18:53:19.116635 coreos-metadata[1812]: Mar 17 18:53:19.116 INFO Fetch successful Mar 17 18:53:19.116864 coreos-metadata[1812]: Mar 17 18:53:19.116 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 17 18:53:19.118795 coreos-metadata[1812]: Mar 17 18:53:19.118 INFO Fetch successful Mar 17 18:53:19.119096 coreos-metadata[1812]: Mar 17 18:53:19.119 INFO Fetching http://168.63.129.16/machine/a99203a5-9f9b-4608-b649-a3250edcdfa0/ac0156a4%2Daac0%2D4160%2D93c4%2D88926664fcb2.%5Fci%2D3510.3.7%2Da%2D95dfbd75e4?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 17 18:53:19.121193 coreos-metadata[1812]: Mar 17 18:53:19.121 INFO Fetch successful Mar 17 18:53:19.155245 coreos-metadata[1812]: Mar 17 18:53:19.155 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 17 18:53:19.166247 coreos-metadata[1812]: Mar 17 18:53:19.166 INFO Fetch successful Mar 17 18:53:19.175730 systemd[1]: Finished coreos-metadata.service. Mar 17 18:53:19.538147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 17 18:53:19.538371 systemd[1]: Stopped kubelet.service. Mar 17 18:53:19.539988 systemd[1]: Starting kubelet.service... Mar 17 18:53:19.646586 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 18:53:19.646678 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 18:53:19.646927 systemd[1]: Stopped kubelet.service. Mar 17 18:53:19.649273 systemd[1]: Starting kubelet.service... Mar 17 18:53:19.678165 systemd[1]: Reloading. Mar 17 18:53:19.760597 /usr/lib/systemd/system-generators/torcx-generator[1869]: time="2025-03-17T18:53:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:53:19.763046 /usr/lib/systemd/system-generators/torcx-generator[1869]: time="2025-03-17T18:53:19Z" level=info msg="torcx already run" Mar 17 18:53:19.847344 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:53:19.847861 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:53:19.866174 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:53:19.985150 systemd[1]: Stopping kubelet.service... Mar 17 18:53:19.985955 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:53:19.986274 systemd[1]: Stopped kubelet.service. Mar 17 18:53:19.989180 systemd[1]: Starting kubelet.service... Mar 17 18:53:20.191636 systemd[1]: Started kubelet.service. Mar 17 18:53:20.237883 kubelet[1933]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:53:20.238260 kubelet[1933]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:53:20.238307 kubelet[1933]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:53:20.238462 kubelet[1933]: I0317 18:53:20.238428 1933 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:53:21.596582 kubelet[1933]: I0317 18:53:21.596530 1933 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:53:21.596582 kubelet[1933]: I0317 18:53:21.596565 1933 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:53:21.596981 kubelet[1933]: I0317 18:53:21.596959 1933 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:53:21.621531 kubelet[1933]: I0317 18:53:21.621492 1933 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:53:21.628478 kubelet[1933]: E0317 18:53:21.628435 1933 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:53:21.628478 kubelet[1933]: I0317 18:53:21.628474 1933 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:53:21.633171 kubelet[1933]: I0317 18:53:21.633142 1933 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:53:21.633805 kubelet[1933]: I0317 18:53:21.633267 1933 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:53:21.633953 kubelet[1933]: I0317 18:53:21.633917 1933 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:53:21.634541 kubelet[1933]: I0317 18:53:21.633953 1933 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.20.24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:53:21.634645 kubelet[1933]: I0317 18:53:21.634555 1933 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:53:21.634645 kubelet[1933]: I0317 18:53:21.634566 1933 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:53:21.634697 kubelet[1933]: I0317 18:53:21.634690 1933 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:53:21.638539 kubelet[1933]: I0317 18:53:21.638489 1933 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:53:21.638539 kubelet[1933]: I0317 18:53:21.638545 1933 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:53:21.638676 kubelet[1933]: I0317 18:53:21.638578 1933 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:53:21.638676 kubelet[1933]: I0317 18:53:21.638589 1933 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:53:21.640413 kubelet[1933]: E0317 18:53:21.640381 1933 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:21.640490 kubelet[1933]: E0317 18:53:21.640435 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:21.641440 kubelet[1933]: I0317 18:53:21.641420 1933 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:53:21.643423 kubelet[1933]: I0317 18:53:21.643398 1933 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:53:21.644073 kubelet[1933]: W0317 18:53:21.644051 1933 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:53:21.644781 kubelet[1933]: I0317 18:53:21.644760 1933 server.go:1269] "Started kubelet" Mar 17 18:53:21.652655 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Mar 17 18:53:21.652943 kubelet[1933]: I0317 18:53:21.652912 1933 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:53:21.657867 kubelet[1933]: I0317 18:53:21.657812 1933 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:53:21.658760 kubelet[1933]: I0317 18:53:21.658717 1933 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:53:21.659866 kubelet[1933]: I0317 18:53:21.659710 1933 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:53:21.660090 kubelet[1933]: I0317 18:53:21.660062 1933 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:53:21.660305 kubelet[1933]: I0317 18:53:21.660277 1933 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:53:21.662249 kubelet[1933]: I0317 18:53:21.662220 1933 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:53:21.662529 kubelet[1933]: E0317 18:53:21.662501 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:21.667905 kubelet[1933]: E0317 18:53:21.667871 1933 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:53:21.668344 kubelet[1933]: W0317 18:53:21.668324 1933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 18:53:21.668469 kubelet[1933]: E0317 18:53:21.668452 1933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 18:53:21.668566 kubelet[1933]: W0317 18:53:21.668547 1933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.20.24" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 18:53:21.668641 kubelet[1933]: E0317 18:53:21.668628 1933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.200.20.24\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 18:53:21.668722 kubelet[1933]: I0317 18:53:21.668690 1933 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:53:21.668826 kubelet[1933]: I0317 18:53:21.668809 1933 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:53:21.669180 kubelet[1933]: E0317 18:53:21.663761 1933 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.24.182dabde305cb10f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.24,UID:10.200.20.24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.20.24,},FirstTimestamp:2025-03-17 18:53:21.644720399 +0000 UTC m=+1.448720509,LastTimestamp:2025-03-17 18:53:21.644720399 +0000 UTC m=+1.448720509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.24,}" Mar 17 18:53:21.671086 kubelet[1933]: I0317 18:53:21.671053 1933 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:53:21.671204 kubelet[1933]: I0317 18:53:21.671178 1933 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:53:21.675134 kubelet[1933]: I0317 18:53:21.675105 1933 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:53:21.694638 kubelet[1933]: E0317 18:53:21.694577 1933 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.20.24\" not found" node="10.200.20.24" Mar 17 18:53:21.698530 kubelet[1933]: I0317 18:53:21.698453 1933 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:53:21.698530 kubelet[1933]: I0317 18:53:21.698518 1933 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:53:21.698709 kubelet[1933]: I0317 18:53:21.698598 
1933 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:53:21.707816 kubelet[1933]: I0317 18:53:21.707784 1933 policy_none.go:49] "None policy: Start" Mar 17 18:53:21.710689 kubelet[1933]: I0317 18:53:21.710656 1933 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:53:21.710838 kubelet[1933]: I0317 18:53:21.710714 1933 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:53:21.727177 systemd[1]: Created slice kubepods.slice. Mar 17 18:53:21.732901 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 18:53:21.736438 systemd[1]: Created slice kubepods-besteffort.slice. Mar 17 18:53:21.743884 kubelet[1933]: I0317 18:53:21.743855 1933 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:53:21.747166 kubelet[1933]: I0317 18:53:21.747140 1933 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:53:21.748285 kubelet[1933]: I0317 18:53:21.748222 1933 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:53:21.750172 kubelet[1933]: I0317 18:53:21.750141 1933 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:53:21.750319 kubelet[1933]: I0317 18:53:21.750147 1933 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:53:21.751438 kubelet[1933]: E0317 18:53:21.751418 1933 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.24\" not found" Mar 17 18:53:21.752200 kubelet[1933]: I0317 18:53:21.752165 1933 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:53:21.752200 kubelet[1933]: I0317 18:53:21.752198 1933 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:53:21.752311 kubelet[1933]: I0317 18:53:21.752217 1933 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:53:21.752311 kubelet[1933]: E0317 18:53:21.752260 1933 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 18:53:21.849318 kubelet[1933]: I0317 18:53:21.849218 1933 kubelet_node_status.go:72] "Attempting to register node" node="10.200.20.24" Mar 17 18:53:21.857567 kubelet[1933]: I0317 18:53:21.857530 1933 kubelet_node_status.go:75] "Successfully registered node" node="10.200.20.24" Mar 17 18:53:21.857567 kubelet[1933]: E0317 18:53:21.857567 1933 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.200.20.24\": node \"10.200.20.24\" not found" Mar 17 18:53:21.869303 kubelet[1933]: E0317 18:53:21.869259 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:21.970112 kubelet[1933]: E0317 18:53:21.970069 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:22.070330 kubelet[1933]: E0317 18:53:22.070300 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:22.171115 kubelet[1933]: E0317 18:53:22.171032 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:22.219503 sudo[1808]: pam_unix(sudo:session): session closed for user root Mar 17 18:53:22.271668 kubelet[1933]: E0317 18:53:22.271631 1933 kubelet_node_status.go:453] "Error getting the current 
node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:22.319984 sshd[1805]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:22.322414 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:53:22.323134 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:53:22.323250 systemd[1]: sshd@4-10.200.20.24:22-10.200.16.10:51774.service: Deactivated successfully. Mar 17 18:53:22.324345 systemd-logind[1438]: Removed session 7. Mar 17 18:53:22.372040 kubelet[1933]: E0317 18:53:22.371998 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:22.472393 kubelet[1933]: E0317 18:53:22.472368 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:22.572787 kubelet[1933]: E0317 18:53:22.572763 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:22.600920 kubelet[1933]: I0317 18:53:22.600885 1933 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 18:53:22.601289 kubelet[1933]: W0317 18:53:22.601055 1933 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 18:53:22.601289 kubelet[1933]: W0317 18:53:22.601093 1933 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 18:53:22.641131 kubelet[1933]: E0317 18:53:22.641097 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:22.673458 kubelet[1933]: E0317 18:53:22.673423 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:22.774872 kubelet[1933]: E0317 18:53:22.774463 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:22.874806 kubelet[1933]: E0317 18:53:22.874767 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:22.974921 kubelet[1933]: E0317 18:53:22.974887 1933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Mar 17 18:53:23.076440 kubelet[1933]: I0317 18:53:23.076193 1933 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 18:53:23.076901 env[1450]: time="2025-03-17T18:53:23.076794584Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
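
The containerd message just above ("No cni config template is specified, wait for other system components to drop the config"), together with the 192.168.1.0/24 Pod CIDR push, means the runtime is waiting for a CNI conflist to appear under /etc/cni/net.d; on this node that will presumably come from the Cilium pods being scheduled below. A generic, hypothetical illustration of the conflist format only, not what Cilium actually installs:

# Hypothetical CNI conflist for illustration; Cilium writes its own configuration.
import json
from pathlib import Path

conflist = {
    "cniVersion": "0.3.1",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "192.168.1.0/24",  # the Pod CIDR pushed above
            },
        }
    ],
}
Path("/etc/cni/net.d").mkdir(parents=True, exist_ok=True)
Path("/etc/cni/net.d/10-example.conflist").write_text(json.dumps(conflist, indent=2))
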
Mar 17 18:53:23.077458 kubelet[1933]: I0317 18:53:23.077442 1933 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 18:53:23.641832 kubelet[1933]: I0317 18:53:23.641790 1933 apiserver.go:52] "Watching apiserver" Mar 17 18:53:23.642182 kubelet[1933]: E0317 18:53:23.641852 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:23.650260 systemd[1]: Created slice kubepods-besteffort-pod28f42c69_9e8d_4d9a_b123_283555bd518d.slice. Mar 17 18:53:23.662940 systemd[1]: Created slice kubepods-burstable-pod651bdb21_66a7_4e84_8e44_78d197cc2f79.slice. Mar 17 18:53:23.669640 kubelet[1933]: I0317 18:53:23.669584 1933 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 18:53:23.679295 kubelet[1933]: I0317 18:53:23.679229 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-config-path\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679295 kubelet[1933]: I0317 18:53:23.679280 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-host-proc-sys-net\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679295 kubelet[1933]: I0317 18:53:23.679300 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq7hg\" (UniqueName: \"kubernetes.io/projected/28f42c69-9e8d-4d9a-b123-283555bd518d-kube-api-access-mq7hg\") pod \"kube-proxy-9rlqt\" (UID: \"28f42c69-9e8d-4d9a-b123-283555bd518d\") " pod="kube-system/kube-proxy-9rlqt" Mar 17 18:53:23.679522 kubelet[1933]: I0317 18:53:23.679332 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-bpf-maps\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679522 kubelet[1933]: I0317 18:53:23.679349 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-hostproc\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679522 kubelet[1933]: I0317 18:53:23.679364 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/651bdb21-66a7-4e84-8e44-78d197cc2f79-clustermesh-secrets\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679522 kubelet[1933]: I0317 18:53:23.679410 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-xtables-lock\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679522 kubelet[1933]: I0317 18:53:23.679428 1933 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28f42c69-9e8d-4d9a-b123-283555bd518d-lib-modules\") pod \"kube-proxy-9rlqt\" (UID: \"28f42c69-9e8d-4d9a-b123-283555bd518d\") " pod="kube-system/kube-proxy-9rlqt" Mar 17 18:53:23.679522 kubelet[1933]: I0317 18:53:23.679442 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-run\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679661 kubelet[1933]: I0317 18:53:23.679458 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-cgroup\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679661 kubelet[1933]: I0317 18:53:23.679473 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-etc-cni-netd\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679661 kubelet[1933]: I0317 18:53:23.679501 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cni-path\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679661 kubelet[1933]: I0317 18:53:23.679515 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-lib-modules\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679661 kubelet[1933]: I0317 18:53:23.679531 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/651bdb21-66a7-4e84-8e44-78d197cc2f79-hubble-tls\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679661 kubelet[1933]: I0317 18:53:23.679550 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28f42c69-9e8d-4d9a-b123-283555bd518d-kube-proxy\") pod \"kube-proxy-9rlqt\" (UID: \"28f42c69-9e8d-4d9a-b123-283555bd518d\") " pod="kube-system/kube-proxy-9rlqt" Mar 17 18:53:23.679817 kubelet[1933]: I0317 18:53:23.679575 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28f42c69-9e8d-4d9a-b123-283555bd518d-xtables-lock\") pod \"kube-proxy-9rlqt\" (UID: \"28f42c69-9e8d-4d9a-b123-283555bd518d\") " pod="kube-system/kube-proxy-9rlqt" Mar 17 18:53:23.679817 kubelet[1933]: I0317 18:53:23.679590 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-host-proc-sys-kernel\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.679817 kubelet[1933]: I0317 18:53:23.679611 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dsns\" (UniqueName: \"kubernetes.io/projected/651bdb21-66a7-4e84-8e44-78d197cc2f79-kube-api-access-2dsns\") pod \"cilium-7lqgh\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " pod="kube-system/cilium-7lqgh" Mar 17 18:53:23.781546 kubelet[1933]: I0317 18:53:23.781496 1933 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 17 18:53:23.964562 env[1450]: time="2025-03-17T18:53:23.964513358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rlqt,Uid:28f42c69-9e8d-4d9a-b123-283555bd518d,Namespace:kube-system,Attempt:0,}" Mar 17 18:53:23.972239 env[1450]: time="2025-03-17T18:53:23.972061766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7lqgh,Uid:651bdb21-66a7-4e84-8e44-78d197cc2f79,Namespace:kube-system,Attempt:0,}" Mar 17 18:53:24.642969 kubelet[1933]: E0317 18:53:24.642928 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:25.643630 kubelet[1933]: E0317 18:53:25.643547 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:25.817527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806472563.mount: Deactivated successfully. Mar 17 18:53:25.847513 env[1450]: time="2025-03-17T18:53:25.847440789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:25.853378 env[1450]: time="2025-03-17T18:53:25.853320127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:25.860973 env[1450]: time="2025-03-17T18:53:25.860931139Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:25.867456 env[1450]: time="2025-03-17T18:53:25.867412835Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:25.871356 env[1450]: time="2025-03-17T18:53:25.871307660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:25.874262 env[1450]: time="2025-03-17T18:53:25.874217049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:25.877888 env[1450]: time="2025-03-17T18:53:25.877844956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:53:25.881340 env[1450]: time="2025-03-17T18:53:25.881296743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:25.940582 env[1450]: time="2025-03-17T18:53:25.938511168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:53:25.940582 env[1450]: time="2025-03-17T18:53:25.938553208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:53:25.940582 env[1450]: time="2025-03-17T18:53:25.938563568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:53:25.940582 env[1450]: time="2025-03-17T18:53:25.938807327Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e pid=1983 runtime=io.containerd.runc.v2 Mar 17 18:53:25.942915 env[1450]: time="2025-03-17T18:53:25.942715992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:53:25.942915 env[1450]: time="2025-03-17T18:53:25.942771632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:53:25.942915 env[1450]: time="2025-03-17T18:53:25.942783232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:53:25.943090 env[1450]: time="2025-03-17T18:53:25.942941791Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3622644dee6e6606d7a461a18cbaaf8389de1ac3801c57f5ef8ccc14b3a173f5 pid=1996 runtime=io.containerd.runc.v2 Mar 17 18:53:25.959403 systemd[1]: Started cri-containerd-bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e.scope. Mar 17 18:53:25.974271 systemd[1]: Started cri-containerd-3622644dee6e6606d7a461a18cbaaf8389de1ac3801c57f5ef8ccc14b3a173f5.scope. Mar 17 18:53:26.005608 env[1450]: time="2025-03-17T18:53:26.005560118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rlqt,Uid:28f42c69-9e8d-4d9a-b123-283555bd518d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3622644dee6e6606d7a461a18cbaaf8389de1ac3801c57f5ef8ccc14b3a173f5\"" Mar 17 18:53:26.009356 env[1450]: time="2025-03-17T18:53:26.009299504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7lqgh,Uid:651bdb21-66a7-4e84-8e44-78d197cc2f79,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\"" Mar 17 18:53:26.010503 env[1450]: time="2025-03-17T18:53:26.010455300Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 18:53:26.644102 kubelet[1933]: E0317 18:53:26.644036 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:27.085032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2152067619.mount: Deactivated successfully. 
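Note: at this point containerd has started two runc shims in the k8s.io namespace and returned sandbox IDs for kube-proxy-9rlqt and cilium-7lqgh. As a rough illustration (not taken from the log), the same state can be inspected with the containerd Go client; the socket path used below is the stock /run/containerd/containerd.sock and is an assumption.

```go
// Sketch: list the containers containerd tracks in the k8s.io namespace, which
// should include the two sandbox IDs returned above. Socket path assumed.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		imageName := "<no image>"
		if img, err := c.Image(ctx); err == nil {
			imageName = img.Name()
		}
		// e.g. bf182bf71da3… and 3622644dee6e… backed by the pause:3.6 sandbox image
		fmt.Println(c.ID(), imageName)
	}
}
```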
Mar 17 18:53:27.584532 env[1450]: time="2025-03-17T18:53:27.584480376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:27.591803 env[1450]: time="2025-03-17T18:53:27.591760712Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:27.597749 env[1450]: time="2025-03-17T18:53:27.597710893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:27.602944 env[1450]: time="2025-03-17T18:53:27.602907276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:27.603523 env[1450]: time="2025-03-17T18:53:27.603490914Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 17 18:53:27.605820 env[1450]: time="2025-03-17T18:53:27.605359348Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:53:27.607207 env[1450]: time="2025-03-17T18:53:27.607174422Z" level=info msg="CreateContainer within sandbox \"3622644dee6e6606d7a461a18cbaaf8389de1ac3801c57f5ef8ccc14b3a173f5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:53:27.641599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571565985.mount: Deactivated successfully. Mar 17 18:53:27.645472 kubelet[1933]: E0317 18:53:27.645399 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:27.671219 env[1450]: time="2025-03-17T18:53:27.671163131Z" level=info msg="CreateContainer within sandbox \"3622644dee6e6606d7a461a18cbaaf8389de1ac3801c57f5ef8ccc14b3a173f5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"28f8b1dfb9eb187c9bf1c25f481a81ed45e080b437d6ffd9064d1b341a6f8f5f\"" Mar 17 18:53:27.672378 env[1450]: time="2025-03-17T18:53:27.672346127Z" level=info msg="StartContainer for \"28f8b1dfb9eb187c9bf1c25f481a81ed45e080b437d6ffd9064d1b341a6f8f5f\"" Mar 17 18:53:27.687306 systemd[1]: Started cri-containerd-28f8b1dfb9eb187c9bf1c25f481a81ed45e080b437d6ffd9064d1b341a6f8f5f.scope. Mar 17 18:53:27.722586 env[1450]: time="2025-03-17T18:53:27.722535401Z" level=info msg="StartContainer for \"28f8b1dfb9eb187c9bf1c25f481a81ed45e080b437d6ffd9064d1b341a6f8f5f\" returns successfully" Mar 17 18:53:27.811865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1447302401.mount: Deactivated successfully. 
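Note: the block above records the kube-proxy image pull resolving registry.k8s.io/kube-proxy:v1.31.7 to sha256:939054a0dc9c…, followed by CreateContainer/StartContainer inside the sandbox created earlier. A hedged sketch of an equivalent pull through the containerd Go client follows; socket path and namespace are the defaults seen elsewhere in the log, not kubelet code.

```go
// Sketch: pull and unpack the same kube-proxy image the kubelet pulled above,
// via the containerd Go client in the k8s.io namespace (socket path assumed).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.31.7", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// The log reports the resolved reference sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff
	fmt.Println("pulled:", img.Name(), img.Target().Digest)
}
```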
Mar 17 18:53:28.646318 kubelet[1933]: E0317 18:53:28.646276 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:29.647491 kubelet[1933]: E0317 18:53:29.647457 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:30.647961 kubelet[1933]: E0317 18:53:30.647925 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:31.648255 kubelet[1933]: E0317 18:53:31.648209 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:32.648859 kubelet[1933]: E0317 18:53:32.648820 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:33.649534 kubelet[1933]: E0317 18:53:33.649483 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:34.650213 kubelet[1933]: E0317 18:53:34.650162 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:35.650903 kubelet[1933]: E0317 18:53:35.650839 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:36.651704 kubelet[1933]: E0317 18:53:36.651674 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:37.653174 kubelet[1933]: E0317 18:53:37.653128 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:38.653968 kubelet[1933]: E0317 18:53:38.653921 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:39.171341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4001677941.mount: Deactivated successfully. 
Mar 17 18:53:39.654534 kubelet[1933]: E0317 18:53:39.654489 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:40.655500 kubelet[1933]: E0317 18:53:40.655419 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:41.456033 env[1450]: time="2025-03-17T18:53:41.455974166Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:41.478143 env[1450]: time="2025-03-17T18:53:41.478092216Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:41.483525 env[1450]: time="2025-03-17T18:53:41.483479489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:41.484045 env[1450]: time="2025-03-17T18:53:41.484011568Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 18:53:41.486669 env[1450]: time="2025-03-17T18:53:41.486627285Z" level=info msg="CreateContainer within sandbox \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:53:41.507927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590029174.mount: Deactivated successfully. Mar 17 18:53:41.533420 env[1450]: time="2025-03-17T18:53:41.533350662Z" level=info msg="CreateContainer within sandbox \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd\"" Mar 17 18:53:41.534199 env[1450]: time="2025-03-17T18:53:41.534160621Z" level=info msg="StartContainer for \"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd\"" Mar 17 18:53:41.553591 systemd[1]: Started cri-containerd-62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd.scope. Mar 17 18:53:41.588957 systemd[1]: cri-containerd-62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd.scope: Deactivated successfully. 
Mar 17 18:53:41.590038 env[1450]: time="2025-03-17T18:53:41.589984587Z" level=info msg="StartContainer for \"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd\" returns successfully" Mar 17 18:53:41.639554 kubelet[1933]: E0317 18:53:41.639506 1933 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:41.656081 kubelet[1933]: E0317 18:53:41.656038 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:41.822000 kubelet[1933]: I0317 18:53:41.821396 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9rlqt" podStartSLOduration=19.225188595 podStartE2EDuration="20.821369558s" podCreationTimestamp="2025-03-17 18:53:21 +0000 UTC" firstStartedPulling="2025-03-17 18:53:26.008827026 +0000 UTC m=+5.812827176" lastFinishedPulling="2025-03-17 18:53:27.605007989 +0000 UTC m=+7.409008139" observedRunningTime="2025-03-17 18:53:27.792199932 +0000 UTC m=+7.596200082" watchObservedRunningTime="2025-03-17 18:53:41.821369558 +0000 UTC m=+21.625369748" Mar 17 18:53:42.502612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd-rootfs.mount: Deactivated successfully. Mar 17 18:53:42.657207 kubelet[1933]: E0317 18:53:42.657152 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:43.520038 env[1450]: time="2025-03-17T18:53:43.519974497Z" level=info msg="shim disconnected" id=62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd Mar 17 18:53:43.520038 env[1450]: time="2025-03-17T18:53:43.520036457Z" level=warning msg="cleaning up after shim disconnected" id=62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd namespace=k8s.io Mar 17 18:53:43.520038 env[1450]: time="2025-03-17T18:53:43.520046497Z" level=info msg="cleaning up dead shim" Mar 17 18:53:43.527544 env[1450]: time="2025-03-17T18:53:43.527487409Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2276 runtime=io.containerd.runc.v2\n" Mar 17 18:53:43.658100 kubelet[1933]: E0317 18:53:43.658067 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:43.808971 env[1450]: time="2025-03-17T18:53:43.808867678Z" level=info msg="CreateContainer within sandbox \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:53:43.847870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3624522714.mount: Deactivated successfully. Mar 17 18:53:43.853666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286313300.mount: Deactivated successfully. 
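Note: the pod_startup_latency_tracker entry above for kube-proxy-9rlqt reports podStartE2EDuration=20.821369558s and podStartSLOduration=19.225188595s. The difference is exactly the image-pull window (lastFinishedPulling minus firstStartedPulling), consistent with the SLO duration excluding pull time. The small check below only redoes that arithmetic from the timestamps in the log (the monotonic "m=+…" suffixes are dropped before parsing).

```go
// Re-derive podStartSLOduration for kube-proxy-9rlqt from the values logged above:
// SLO duration appears to equal the end-to-end duration minus the image-pull window.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	firstPull, err := time.Parse(layout, "2025-03-17 18:53:26.008827026 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	lastPull, err := time.Parse(layout, "2025-03-17 18:53:27.605007989 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	e2e, err := time.ParseDuration("20.821369558s") // podStartE2EDuration from the log
	if err != nil {
		log.Fatal(err)
	}
	pull := lastPull.Sub(firstPull) // ≈ 1.596180963s spent pulling the image
	fmt.Println(e2e - pull)         // prints 19.225188595s, the logged podStartSLOduration
}
```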
Mar 17 18:53:43.869842 env[1450]: time="2025-03-17T18:53:43.869793487Z" level=info msg="CreateContainer within sandbox \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8\"" Mar 17 18:53:43.870780 env[1450]: time="2025-03-17T18:53:43.870733606Z" level=info msg="StartContainer for \"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8\"" Mar 17 18:53:43.888553 systemd[1]: Started cri-containerd-bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8.scope. Mar 17 18:53:43.925978 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:53:43.926185 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:53:43.926888 env[1450]: time="2025-03-17T18:53:43.926838620Z" level=info msg="StartContainer for \"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8\" returns successfully" Mar 17 18:53:43.927070 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:53:43.929522 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:53:43.935340 systemd[1]: cri-containerd-bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8.scope: Deactivated successfully. Mar 17 18:53:43.940061 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:53:43.980962 env[1450]: time="2025-03-17T18:53:43.980914277Z" level=info msg="shim disconnected" id=bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8 Mar 17 18:53:43.981237 env[1450]: time="2025-03-17T18:53:43.981209396Z" level=warning msg="cleaning up after shim disconnected" id=bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8 namespace=k8s.io Mar 17 18:53:43.981307 env[1450]: time="2025-03-17T18:53:43.981293516Z" level=info msg="cleaning up dead shim" Mar 17 18:53:43.988322 env[1450]: time="2025-03-17T18:53:43.988272388Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2342 runtime=io.containerd.runc.v2\n" Mar 17 18:53:44.658488 kubelet[1933]: E0317 18:53:44.658452 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:44.811872 env[1450]: time="2025-03-17T18:53:44.811776161Z" level=info msg="CreateContainer within sandbox \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:53:44.846708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8-rootfs.mount: Deactivated successfully. Mar 17 18:53:44.850916 env[1450]: time="2025-03-17T18:53:44.850867438Z" level=info msg="CreateContainer within sandbox \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a\"" Mar 17 18:53:44.851818 env[1450]: time="2025-03-17T18:53:44.851782917Z" level=info msg="StartContainer for \"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a\"" Mar 17 18:53:44.874869 systemd[1]: Started cri-containerd-6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a.scope. Mar 17 18:53:44.904444 systemd[1]: cri-containerd-6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a.scope: Deactivated successfully. 
Mar 17 18:53:44.909411 env[1450]: time="2025-03-17T18:53:44.908929254Z" level=info msg="StartContainer for \"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a\" returns successfully" Mar 17 18:53:44.929359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a-rootfs.mount: Deactivated successfully. Mar 17 18:53:44.939278 env[1450]: time="2025-03-17T18:53:44.939234821Z" level=info msg="shim disconnected" id=6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a Mar 17 18:53:44.939522 env[1450]: time="2025-03-17T18:53:44.939493620Z" level=warning msg="cleaning up after shim disconnected" id=6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a namespace=k8s.io Mar 17 18:53:44.939616 env[1450]: time="2025-03-17T18:53:44.939601420Z" level=info msg="cleaning up dead shim" Mar 17 18:53:44.946609 env[1450]: time="2025-03-17T18:53:44.946561933Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2400 runtime=io.containerd.runc.v2\n" Mar 17 18:53:45.659541 kubelet[1933]: E0317 18:53:45.659504 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:45.814248 env[1450]: time="2025-03-17T18:53:45.814156074Z" level=info msg="CreateContainer within sandbox \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:53:45.846799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718547944.mount: Deactivated successfully. Mar 17 18:53:45.854880 env[1450]: time="2025-03-17T18:53:45.854828472Z" level=info msg="CreateContainer within sandbox \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13\"" Mar 17 18:53:45.855441 env[1450]: time="2025-03-17T18:53:45.855400151Z" level=info msg="StartContainer for \"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13\"" Mar 17 18:53:45.876124 systemd[1]: run-containerd-runc-k8s.io-9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13-runc.SDxtxA.mount: Deactivated successfully. Mar 17 18:53:45.881415 systemd[1]: Started cri-containerd-9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13.scope. Mar 17 18:53:45.908703 systemd[1]: cri-containerd-9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13.scope: Deactivated successfully. 
Mar 17 18:53:45.911479 env[1450]: time="2025-03-17T18:53:45.910581054Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod651bdb21_66a7_4e84_8e44_78d197cc2f79.slice/cri-containerd-9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13.scope/memory.events\": no such file or directory" Mar 17 18:53:45.916422 env[1450]: time="2025-03-17T18:53:45.916363009Z" level=info msg="StartContainer for \"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13\" returns successfully" Mar 17 18:53:45.948669 env[1450]: time="2025-03-17T18:53:45.948613735Z" level=info msg="shim disconnected" id=9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13 Mar 17 18:53:45.948669 env[1450]: time="2025-03-17T18:53:45.948664175Z" level=warning msg="cleaning up after shim disconnected" id=9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13 namespace=k8s.io Mar 17 18:53:45.948669 env[1450]: time="2025-03-17T18:53:45.948673215Z" level=info msg="cleaning up dead shim" Mar 17 18:53:45.956133 env[1450]: time="2025-03-17T18:53:45.956080448Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2458 runtime=io.containerd.runc.v2\n" Mar 17 18:53:46.660295 kubelet[1933]: E0317 18:53:46.660229 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:46.818588 env[1450]: time="2025-03-17T18:53:46.818542411Z" level=info msg="CreateContainer within sandbox \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:53:46.846803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13-rootfs.mount: Deactivated successfully. Mar 17 18:53:46.850496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1784350882.mount: Deactivated successfully. Mar 17 18:53:46.856149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212673115.mount: Deactivated successfully. Mar 17 18:53:46.871977 env[1450]: time="2025-03-17T18:53:46.871924759Z" level=info msg="CreateContainer within sandbox \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\"" Mar 17 18:53:46.872667 env[1450]: time="2025-03-17T18:53:46.872638799Z" level=info msg="StartContainer for \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\"" Mar 17 18:53:46.887289 systemd[1]: Started cri-containerd-f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09.scope. Mar 17 18:53:46.933463 env[1450]: time="2025-03-17T18:53:46.933406980Z" level=info msg="StartContainer for \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\" returns successfully" Mar 17 18:53:47.001778 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Mar 17 18:53:47.019091 kubelet[1933]: I0317 18:53:47.019060 1933 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 18:53:47.496777 kernel: Initializing XFRM netlink socket Mar 17 18:53:47.505798 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
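Note: the block above records the cilium pod's init containers running one after another (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-lived cilium-agent container starts; each init container's scope is deactivated and its shim cleaned up as soon as it exits, which is why every StartContainer is followed shortly by a "shim disconnected" message. The loop below is only a stub-based sketch of that ordering; runInitContainer and startMainContainer are hypothetical helpers, not kubelet APIs.

```go
// Stub sketch of the ordering visible above: init containers run sequentially to
// completion, then the main cilium-agent container is started and left running.
package main

import (
	"fmt"
	"log"
)

// runInitContainer stands in for a CreateContainer/StartContainer pair whose runc
// shim exits (scope deactivated, "shim disconnected") once the container finishes.
func runInitContainer(name string) error {
	fmt.Println("init container finished:", name)
	return nil
}

func startMainContainer(name string) error {
	fmt.Println("main container running:", name)
	return nil
}

func main() {
	initOrder := []string{
		"mount-cgroup",
		"apply-sysctl-overwrites",
		"mount-bpf-fs",
		"clean-cilium-state",
	}
	for _, name := range initOrder {
		if err := runInitContainer(name); err != nil {
			log.Fatalf("init container %s failed, pod cannot start: %v", name, err)
		}
	}
	if err := startMainContainer("cilium-agent"); err != nil {
		log.Fatal(err)
	}
}
```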
Mar 17 18:53:47.660983 kubelet[1933]: E0317 18:53:47.660932 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:47.846923 kubelet[1933]: I0317 18:53:47.846786 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7lqgh" podStartSLOduration=11.372040682 podStartE2EDuration="26.846741668s" podCreationTimestamp="2025-03-17 18:53:21 +0000 UTC" firstStartedPulling="2025-03-17 18:53:26.01066722 +0000 UTC m=+5.814667370" lastFinishedPulling="2025-03-17 18:53:41.485368206 +0000 UTC m=+21.289368356" observedRunningTime="2025-03-17 18:53:47.837403356 +0000 UTC m=+27.641403506" watchObservedRunningTime="2025-03-17 18:53:47.846741668 +0000 UTC m=+27.650741818" Mar 17 18:53:48.103035 systemd[1]: Created slice kubepods-besteffort-podb9244723_9105_4c9e_ab5b_0aaee2c974aa.slice. Mar 17 18:53:48.130863 kubelet[1933]: I0317 18:53:48.130818 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l65rf\" (UniqueName: \"kubernetes.io/projected/b9244723-9105-4c9e-ab5b-0aaee2c974aa-kube-api-access-l65rf\") pod \"nginx-deployment-8587fbcb89-62642\" (UID: \"b9244723-9105-4c9e-ab5b-0aaee2c974aa\") " pod="default/nginx-deployment-8587fbcb89-62642" Mar 17 18:53:48.406982 env[1450]: time="2025-03-17T18:53:48.406248183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-62642,Uid:b9244723-9105-4c9e-ab5b-0aaee2c974aa,Namespace:default,Attempt:0,}" Mar 17 18:53:48.661926 kubelet[1933]: E0317 18:53:48.661576 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:49.193915 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:53:49.194731 systemd-networkd[1603]: cilium_host: Link UP Mar 17 18:53:49.195915 systemd-networkd[1603]: cilium_net: Link UP Mar 17 18:53:49.195920 systemd-networkd[1603]: cilium_net: Gained carrier Mar 17 18:53:49.196075 systemd-networkd[1603]: cilium_host: Gained carrier Mar 17 18:53:49.196289 systemd-networkd[1603]: cilium_host: Gained IPv6LL Mar 17 18:53:49.340026 systemd-networkd[1603]: cilium_vxlan: Link UP Mar 17 18:53:49.340034 systemd-networkd[1603]: cilium_vxlan: Gained carrier Mar 17 18:53:49.381934 systemd-networkd[1603]: cilium_net: Gained IPv6LL Mar 17 18:53:49.662403 kubelet[1933]: E0317 18:53:49.662244 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:49.688780 kernel: NET: Registered PF_ALG protocol family Mar 17 18:53:50.543087 systemd-networkd[1603]: lxc_health: Link UP Mar 17 18:53:50.571778 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:53:50.571827 systemd-networkd[1603]: lxc_health: Gained carrier Mar 17 18:53:50.663173 kubelet[1933]: E0317 18:53:50.663116 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:50.985665 systemd-networkd[1603]: lxc69b703126cc4: Link UP Mar 17 18:53:50.999794 kernel: eth0: renamed from tmpadf15 Mar 17 18:53:51.009097 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc69b703126cc4: link becomes ready Mar 17 18:53:51.008544 systemd-networkd[1603]: lxc69b703126cc4: Gained carrier Mar 17 18:53:51.053937 systemd-networkd[1603]: cilium_vxlan: Gained IPv6LL Mar 17 18:53:51.663979 kubelet[1933]: E0317 18:53:51.663934 1933 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:51.693879 systemd-networkd[1603]: lxc_health: Gained IPv6LL Mar 17 18:53:52.664902 kubelet[1933]: E0317 18:53:52.664848 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:52.909916 systemd-networkd[1603]: lxc69b703126cc4: Gained IPv6LL Mar 17 18:53:53.665837 kubelet[1933]: E0317 18:53:53.665776 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:54.666733 kubelet[1933]: E0317 18:53:54.666674 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:54.891176 env[1450]: time="2025-03-17T18:53:54.891083288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:53:54.891176 env[1450]: time="2025-03-17T18:53:54.891130488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:53:54.891176 env[1450]: time="2025-03-17T18:53:54.891141848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:53:54.891829 env[1450]: time="2025-03-17T18:53:54.891773448Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/adf1577ae4c768e3c7fcd50567966aefdd62ff8af573343cec62d13fa9c34d99 pid=2989 runtime=io.containerd.runc.v2 Mar 17 18:53:54.910324 systemd[1]: run-containerd-runc-k8s.io-adf1577ae4c768e3c7fcd50567966aefdd62ff8af573343cec62d13fa9c34d99-runc.rnef6w.mount: Deactivated successfully. Mar 17 18:53:54.913917 systemd[1]: Started cri-containerd-adf1577ae4c768e3c7fcd50567966aefdd62ff8af573343cec62d13fa9c34d99.scope. Mar 17 18:53:54.949326 env[1450]: time="2025-03-17T18:53:54.949277375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-62642,Uid:b9244723-9105-4c9e-ab5b-0aaee2c974aa,Namespace:default,Attempt:0,} returns sandbox id \"adf1577ae4c768e3c7fcd50567966aefdd62ff8af573343cec62d13fa9c34d99\"" Mar 17 18:53:54.951567 env[1450]: time="2025-03-17T18:53:54.951524534Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:53:55.667403 kubelet[1933]: E0317 18:53:55.667359 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:56.668438 kubelet[1933]: E0317 18:53:56.668348 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:57.452710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274959661.mount: Deactivated successfully. 
Mar 17 18:53:57.669050 kubelet[1933]: E0317 18:53:57.669006 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:58.669567 kubelet[1933]: E0317 18:53:58.669510 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:58.769458 env[1450]: time="2025-03-17T18:53:58.769388919Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:58.777291 env[1450]: time="2025-03-17T18:53:58.777240196Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:58.783160 env[1450]: time="2025-03-17T18:53:58.783102313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:58.789179 env[1450]: time="2025-03-17T18:53:58.789121391Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:53:58.790220 env[1450]: time="2025-03-17T18:53:58.790185550Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 18:53:58.793061 env[1450]: time="2025-03-17T18:53:58.793017909Z" level=info msg="CreateContainer within sandbox \"adf1577ae4c768e3c7fcd50567966aefdd62ff8af573343cec62d13fa9c34d99\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 18:53:58.822254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1732963897.mount: Deactivated successfully. Mar 17 18:53:58.829535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021417496.mount: Deactivated successfully. Mar 17 18:53:58.849674 env[1450]: time="2025-03-17T18:53:58.849601724Z" level=info msg="CreateContainer within sandbox \"adf1577ae4c768e3c7fcd50567966aefdd62ff8af573343cec62d13fa9c34d99\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4c337e74bdae10f71243b7c9c4523e7fee72b47d137089f573f8674c96ae7f02\"" Mar 17 18:53:58.850605 env[1450]: time="2025-03-17T18:53:58.850561243Z" level=info msg="StartContainer for \"4c337e74bdae10f71243b7c9c4523e7fee72b47d137089f573f8674c96ae7f02\"" Mar 17 18:53:58.870445 systemd[1]: Started cri-containerd-4c337e74bdae10f71243b7c9c4523e7fee72b47d137089f573f8674c96ae7f02.scope. 
Mar 17 18:53:58.907867 env[1450]: time="2025-03-17T18:53:58.907813498Z" level=info msg="StartContainer for \"4c337e74bdae10f71243b7c9c4523e7fee72b47d137089f573f8674c96ae7f02\" returns successfully" Mar 17 18:53:59.669943 kubelet[1933]: E0317 18:53:59.669884 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:53:59.861893 kubelet[1933]: I0317 18:53:59.861825 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-62642" podStartSLOduration=8.021479034 podStartE2EDuration="11.86180549s" podCreationTimestamp="2025-03-17 18:53:48 +0000 UTC" firstStartedPulling="2025-03-17 18:53:54.950884734 +0000 UTC m=+34.754884884" lastFinishedPulling="2025-03-17 18:53:58.79121123 +0000 UTC m=+38.595211340" observedRunningTime="2025-03-17 18:53:59.861213477 +0000 UTC m=+39.665213627" watchObservedRunningTime="2025-03-17 18:53:59.86180549 +0000 UTC m=+39.665805640" Mar 17 18:54:00.670254 kubelet[1933]: E0317 18:54:00.670201 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:01.638915 kubelet[1933]: E0317 18:54:01.638874 1933 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:01.671340 kubelet[1933]: E0317 18:54:01.671311 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:02.671637 kubelet[1933]: E0317 18:54:02.671587 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:03.671927 kubelet[1933]: E0317 18:54:03.671887 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:04.290126 systemd[1]: Created slice kubepods-besteffort-pod19e8aaab_30b0_4c97_a2a5_6df3c4249088.slice. 
Mar 17 18:54:04.325220 kubelet[1933]: I0317 18:54:04.325167 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nj96\" (UniqueName: \"kubernetes.io/projected/19e8aaab-30b0-4c97-a2a5-6df3c4249088-kube-api-access-9nj96\") pod \"nfs-server-provisioner-0\" (UID: \"19e8aaab-30b0-4c97-a2a5-6df3c4249088\") " pod="default/nfs-server-provisioner-0" Mar 17 18:54:04.325220 kubelet[1933]: I0317 18:54:04.325223 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/19e8aaab-30b0-4c97-a2a5-6df3c4249088-data\") pod \"nfs-server-provisioner-0\" (UID: \"19e8aaab-30b0-4c97-a2a5-6df3c4249088\") " pod="default/nfs-server-provisioner-0" Mar 17 18:54:04.595101 env[1450]: time="2025-03-17T18:54:04.594712478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:19e8aaab-30b0-4c97-a2a5-6df3c4249088,Namespace:default,Attempt:0,}" Mar 17 18:54:04.673208 kubelet[1933]: E0317 18:54:04.673169 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:04.677854 systemd-networkd[1603]: lxce8233e514290: Link UP Mar 17 18:54:04.690787 kernel: eth0: renamed from tmp3397b Mar 17 18:54:04.706249 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:54:04.706356 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce8233e514290: link becomes ready Mar 17 18:54:04.707139 systemd-networkd[1603]: lxce8233e514290: Gained carrier Mar 17 18:54:04.898101 env[1450]: time="2025-03-17T18:54:04.897558864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:54:04.898101 env[1450]: time="2025-03-17T18:54:04.897661082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:54:04.898101 env[1450]: time="2025-03-17T18:54:04.897687527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:54:04.898383 env[1450]: time="2025-03-17T18:54:04.898342083Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3397bb99238edbb1d4b666268a160216cd5466325a4e69b6bc66fbf68ecd4fa1 pid=3118 runtime=io.containerd.runc.v2 Mar 17 18:54:04.915914 systemd[1]: Started cri-containerd-3397bb99238edbb1d4b666268a160216cd5466325a4e69b6bc66fbf68ecd4fa1.scope. Mar 17 18:54:04.952147 env[1450]: time="2025-03-17T18:54:04.952102798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:19e8aaab-30b0-4c97-a2a5-6df3c4249088,Namespace:default,Attempt:0,} returns sandbox id \"3397bb99238edbb1d4b666268a160216cd5466325a4e69b6bc66fbf68ecd4fa1\"" Mar 17 18:54:04.954408 env[1450]: time="2025-03-17T18:54:04.954370241Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 18:54:05.437438 systemd[1]: run-containerd-runc-k8s.io-3397bb99238edbb1d4b666268a160216cd5466325a4e69b6bc66fbf68ecd4fa1-runc.qImvOg.mount: Deactivated successfully. 
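Note: the reconciler entries above attach two volumes for nfs-server-provisioner-0: a projected service-account token volume (kube-api-access-9nj96) and an emptyDir named "data". The corev1 construction below mirrors just those two; only the volume names and types come from the log, the projection contents are illustrative.

```go
// Sketch of the two volumes attached above for nfs-server-provisioner-0.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			Name: "kube-api-access-9nj96",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						// Illustrative projection; the log only names the volume.
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
					},
				},
			},
		},
		{
			Name: "data",
			VolumeSource: corev1.VolumeSource{
				EmptyDir: &corev1.EmptyDirVolumeSource{},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}
```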
Mar 17 18:54:05.674169 kubelet[1933]: E0317 18:54:05.673973 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:06.606921 systemd-networkd[1603]: lxce8233e514290: Gained IPv6LL Mar 17 18:54:06.675262 kubelet[1933]: E0317 18:54:06.675203 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:07.352455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009087460.mount: Deactivated successfully. Mar 17 18:54:07.675692 kubelet[1933]: E0317 18:54:07.675356 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:08.676045 kubelet[1933]: E0317 18:54:08.676004 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:09.563946 env[1450]: time="2025-03-17T18:54:09.563892216Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:54:09.572828 env[1450]: time="2025-03-17T18:54:09.572779113Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:54:09.580324 env[1450]: time="2025-03-17T18:54:09.580276074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:54:09.587985 env[1450]: time="2025-03-17T18:54:09.587943022Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:54:09.588846 env[1450]: time="2025-03-17T18:54:09.588810116Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Mar 17 18:54:09.591588 env[1450]: time="2025-03-17T18:54:09.591545780Z" level=info msg="CreateContainer within sandbox \"3397bb99238edbb1d4b666268a160216cd5466325a4e69b6bc66fbf68ecd4fa1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 18:54:09.618309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount806370172.mount: Deactivated successfully. Mar 17 18:54:09.623717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1490025722.mount: Deactivated successfully. Mar 17 18:54:09.644617 env[1450]: time="2025-03-17T18:54:09.644568312Z" level=info msg="CreateContainer within sandbox \"3397bb99238edbb1d4b666268a160216cd5466325a4e69b6bc66fbf68ecd4fa1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"adcfc0b829d5b91dd25836dcec6399c91c8cc373cb8b3a79970c0c6f0629e45c\"" Mar 17 18:54:09.645606 env[1450]: time="2025-03-17T18:54:09.645574548Z" level=info msg="StartContainer for \"adcfc0b829d5b91dd25836dcec6399c91c8cc373cb8b3a79970c0c6f0629e45c\"" Mar 17 18:54:09.663216 systemd[1]: Started cri-containerd-adcfc0b829d5b91dd25836dcec6399c91c8cc373cb8b3a79970c0c6f0629e45c.scope. 
Mar 17 18:54:09.676907 kubelet[1933]: E0317 18:54:09.676301 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:09.696463 env[1450]: time="2025-03-17T18:54:09.696402461Z" level=info msg="StartContainer for \"adcfc0b829d5b91dd25836dcec6399c91c8cc373cb8b3a79970c0c6f0629e45c\" returns successfully" Mar 17 18:54:10.676873 kubelet[1933]: E0317 18:54:10.676829 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:11.677401 kubelet[1933]: E0317 18:54:11.677316 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:12.678123 kubelet[1933]: E0317 18:54:12.678085 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:13.679554 kubelet[1933]: E0317 18:54:13.679480 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:14.679690 kubelet[1933]: E0317 18:54:14.679641 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:15.680428 kubelet[1933]: E0317 18:54:15.680389 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:16.681466 kubelet[1933]: E0317 18:54:16.681429 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:17.682284 kubelet[1933]: E0317 18:54:17.682245 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:18.683295 kubelet[1933]: E0317 18:54:18.683238 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:19.684185 kubelet[1933]: E0317 18:54:19.684142 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:19.905615 kubelet[1933]: I0317 18:54:19.905555 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.269095393 podStartE2EDuration="15.905538272s" podCreationTimestamp="2025-03-17 18:54:04 +0000 UTC" firstStartedPulling="2025-03-17 18:54:04.953719086 +0000 UTC m=+44.757719236" lastFinishedPulling="2025-03-17 18:54:09.590161965 +0000 UTC m=+49.394162115" observedRunningTime="2025-03-17 18:54:09.891851094 +0000 UTC m=+49.695851244" watchObservedRunningTime="2025-03-17 18:54:19.905538272 +0000 UTC m=+59.709538422" Mar 17 18:54:19.910666 systemd[1]: Created slice kubepods-besteffort-podca1cadcc_127a_42fd_9106_550d94a597dd.slice. 
Mar 17 18:54:19.917152 kubelet[1933]: I0317 18:54:19.917112 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c2a2cb2-f1ab-4d3d-b72e-713656f7ebf2\" (UniqueName: \"kubernetes.io/nfs/ca1cadcc-127a-42fd-9106-550d94a597dd-pvc-7c2a2cb2-f1ab-4d3d-b72e-713656f7ebf2\") pod \"test-pod-1\" (UID: \"ca1cadcc-127a-42fd-9106-550d94a597dd\") " pod="default/test-pod-1" Mar 17 18:54:19.917152 kubelet[1933]: I0317 18:54:19.917153 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnj6j\" (UniqueName: \"kubernetes.io/projected/ca1cadcc-127a-42fd-9106-550d94a597dd-kube-api-access-wnj6j\") pod \"test-pod-1\" (UID: \"ca1cadcc-127a-42fd-9106-550d94a597dd\") " pod="default/test-pod-1" Mar 17 18:54:20.144778 kernel: FS-Cache: Loaded Mar 17 18:54:20.241766 kernel: RPC: Registered named UNIX socket transport module. Mar 17 18:54:20.245759 kernel: RPC: Registered udp transport module. Mar 17 18:54:20.255242 kernel: RPC: Registered tcp transport module. Mar 17 18:54:20.255318 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Mar 17 18:54:20.353782 kernel: FS-Cache: Netfs 'nfs' registered for caching Mar 17 18:54:20.586777 kernel: NFS: Registering the id_resolver key type Mar 17 18:54:20.586906 kernel: Key type id_resolver registered Mar 17 18:54:20.590181 kernel: Key type id_legacy registered Mar 17 18:54:20.684635 kubelet[1933]: E0317 18:54:20.684590 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:20.956303 nfsidmap[3231]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.7-a-95dfbd75e4' Mar 17 18:54:20.965241 nfsidmap[3232]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.7-a-95dfbd75e4' Mar 17 18:54:21.113724 env[1450]: time="2025-03-17T18:54:21.113672791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ca1cadcc-127a-42fd-9106-550d94a597dd,Namespace:default,Attempt:0,}" Mar 17 18:54:21.193209 systemd-networkd[1603]: lxccbf0ab5c89e8: Link UP Mar 17 18:54:21.201895 kernel: eth0: renamed from tmpb1ba7 Mar 17 18:54:21.215297 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:54:21.215427 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccbf0ab5c89e8: link becomes ready Mar 17 18:54:21.216051 systemd-networkd[1603]: lxccbf0ab5c89e8: Gained carrier Mar 17 18:54:21.401115 env[1450]: time="2025-03-17T18:54:21.401018890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:54:21.401115 env[1450]: time="2025-03-17T18:54:21.401063255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:54:21.401115 env[1450]: time="2025-03-17T18:54:21.401073376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:54:21.401475 env[1450]: time="2025-03-17T18:54:21.401429176Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b1ba70129c2f873547cd138db2d24fef82fbb577e58e0e3f2cbb0d310e559c6f pid=3259 runtime=io.containerd.runc.v2 Mar 17 18:54:21.415481 systemd[1]: Started cri-containerd-b1ba70129c2f873547cd138db2d24fef82fbb577e58e0e3f2cbb0d310e559c6f.scope. Mar 17 18:54:21.422698 systemd[1]: run-containerd-runc-k8s.io-b1ba70129c2f873547cd138db2d24fef82fbb577e58e0e3f2cbb0d310e559c6f-runc.5WEwUH.mount: Deactivated successfully. Mar 17 18:54:21.455264 env[1450]: time="2025-03-17T18:54:21.455222245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ca1cadcc-127a-42fd-9106-550d94a597dd,Namespace:default,Attempt:0,} returns sandbox id \"b1ba70129c2f873547cd138db2d24fef82fbb577e58e0e3f2cbb0d310e559c6f\"" Mar 17 18:54:21.457598 env[1450]: time="2025-03-17T18:54:21.457543867Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:54:21.639644 kubelet[1933]: E0317 18:54:21.639078 1933 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:21.685019 kubelet[1933]: E0317 18:54:21.684983 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:21.873102 env[1450]: time="2025-03-17T18:54:21.873059147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:54:21.881157 env[1450]: time="2025-03-17T18:54:21.881116016Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:54:21.922033 env[1450]: time="2025-03-17T18:54:21.921913258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:54:21.930598 env[1450]: time="2025-03-17T18:54:21.930540472Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:54:21.931522 env[1450]: time="2025-03-17T18:54:21.931486979Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 18:54:21.934073 env[1450]: time="2025-03-17T18:54:21.934033066Z" level=info msg="CreateContainer within sandbox \"b1ba70129c2f873547cd138db2d24fef82fbb577e58e0e3f2cbb0d310e559c6f\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 18:54:21.975589 env[1450]: time="2025-03-17T18:54:21.975517906Z" level=info msg="CreateContainer within sandbox \"b1ba70129c2f873547cd138db2d24fef82fbb577e58e0e3f2cbb0d310e559c6f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2c83e608ba5e8f01917f9b69dd6cc21e4ca1ec04983524157c5e2eb832bc3581\"" Mar 17 18:54:21.976520 env[1450]: time="2025-03-17T18:54:21.976486616Z" level=info msg="StartContainer for \"2c83e608ba5e8f01917f9b69dd6cc21e4ca1ec04983524157c5e2eb832bc3581\"" Mar 17 18:54:21.990980 systemd[1]: Started 
cri-containerd-2c83e608ba5e8f01917f9b69dd6cc21e4ca1ec04983524157c5e2eb832bc3581.scope. Mar 17 18:54:22.021795 env[1450]: time="2025-03-17T18:54:22.021720259Z" level=info msg="StartContainer for \"2c83e608ba5e8f01917f9b69dd6cc21e4ca1ec04983524157c5e2eb832bc3581\" returns successfully" Mar 17 18:54:22.477937 systemd-networkd[1603]: lxccbf0ab5c89e8: Gained IPv6LL Mar 17 18:54:22.685529 kubelet[1933]: E0317 18:54:22.685449 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:23.686516 kubelet[1933]: E0317 18:54:23.686476 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:24.687190 kubelet[1933]: E0317 18:54:24.687151 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:25.688653 kubelet[1933]: E0317 18:54:25.688615 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:26.689773 kubelet[1933]: E0317 18:54:26.689718 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:27.365625 kubelet[1933]: I0317 18:54:27.365561 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=21.889900236 podStartE2EDuration="22.365535618s" podCreationTimestamp="2025-03-17 18:54:05 +0000 UTC" firstStartedPulling="2025-03-17 18:54:21.456869831 +0000 UTC m=+61.260869981" lastFinishedPulling="2025-03-17 18:54:21.932505213 +0000 UTC m=+61.736505363" observedRunningTime="2025-03-17 18:54:22.936075947 +0000 UTC m=+62.740076097" watchObservedRunningTime="2025-03-17 18:54:27.365535618 +0000 UTC m=+67.169535768" Mar 17 18:54:27.389799 systemd[1]: run-containerd-runc-k8s.io-f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09-runc.n618lV.mount: Deactivated successfully. Mar 17 18:54:27.403181 env[1450]: time="2025-03-17T18:54:27.403119667Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:54:27.408813 env[1450]: time="2025-03-17T18:54:27.408726332Z" level=info msg="StopContainer for \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\" with timeout 2 (s)" Mar 17 18:54:27.409216 env[1450]: time="2025-03-17T18:54:27.409168614Z" level=info msg="Stop container \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\" with signal terminated" Mar 17 18:54:27.415274 systemd-networkd[1603]: lxc_health: Link DOWN Mar 17 18:54:27.415284 systemd-networkd[1603]: lxc_health: Lost carrier Mar 17 18:54:27.436112 systemd[1]: cri-containerd-f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09.scope: Deactivated successfully. Mar 17 18:54:27.436549 systemd[1]: cri-containerd-f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09.scope: Consumed 6.650s CPU time. Mar 17 18:54:27.454726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09-rootfs.mount: Deactivated successfully. 
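The cni reload error above appears because containerd watches /etc/cni/net.d for changes and tries to reload its network config as soon as 05-cilium.conf is removed, at which point nothing is left to load. A minimal sketch of that kind of directory watcher, using fsnotify; this is illustrative only and not containerd's actual implementation:

```go
package main

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// Watch a CNI config directory and "reload" whenever a file changes.
// Mirrors the behaviour behind the log line above in spirit only.
func main() {
	const confDir = "/etc/cni/net.d"

	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add(confDir); err != nil {
		log.Fatal(err)
	}

	for ev := range w.Events {
		// Stand-in for containerd's config reload: count remaining conf files.
		matches, _ := filepath.Glob(filepath.Join(confDir, "*.conf*"))
		if len(matches) == 0 {
			log.Printf("failed to reload cni configuration after %s on %q: no network config found in %s",
				ev.Op, ev.Name, confDir)
			continue
		}
		log.Printf("reloaded cni configuration (%d files) after %s on %q", len(matches), ev.Op, ev.Name)
	}
}
```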
Mar 17 18:54:27.690200 kubelet[1933]: E0317 18:54:27.690162 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:28.353643 env[1450]: time="2025-03-17T18:54:28.353583039Z" level=info msg="shim disconnected" id=f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09 Mar 17 18:54:28.353643 env[1450]: time="2025-03-17T18:54:28.353639244Z" level=warning msg="cleaning up after shim disconnected" id=f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09 namespace=k8s.io Mar 17 18:54:28.353643 env[1450]: time="2025-03-17T18:54:28.353650405Z" level=info msg="cleaning up dead shim" Mar 17 18:54:28.361897 env[1450]: time="2025-03-17T18:54:28.361844982Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3392 runtime=io.containerd.runc.v2\n" Mar 17 18:54:28.368125 env[1450]: time="2025-03-17T18:54:28.368060651Z" level=info msg="StopContainer for \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\" returns successfully" Mar 17 18:54:28.369170 env[1450]: time="2025-03-17T18:54:28.369121351Z" level=info msg="StopPodSandbox for \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\"" Mar 17 18:54:28.369273 env[1450]: time="2025-03-17T18:54:28.369209239Z" level=info msg="Container to stop \"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:28.369273 env[1450]: time="2025-03-17T18:54:28.369228441Z" level=info msg="Container to stop \"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:28.369273 env[1450]: time="2025-03-17T18:54:28.369245523Z" level=info msg="Container to stop \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:28.369273 env[1450]: time="2025-03-17T18:54:28.369262124Z" level=info msg="Container to stop \"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:28.371226 env[1450]: time="2025-03-17T18:54:28.369277006Z" level=info msg="Container to stop \"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:28.371113 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e-shm.mount: Deactivated successfully. Mar 17 18:54:28.380252 systemd[1]: cri-containerd-bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e.scope: Deactivated successfully. Mar 17 18:54:28.404093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e-rootfs.mount: Deactivated successfully. 
Mar 17 18:54:28.431977 env[1450]: time="2025-03-17T18:54:28.431907900Z" level=info msg="shim disconnected" id=bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e Mar 17 18:54:28.431977 env[1450]: time="2025-03-17T18:54:28.431972266Z" level=warning msg="cleaning up after shim disconnected" id=bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e namespace=k8s.io Mar 17 18:54:28.431977 env[1450]: time="2025-03-17T18:54:28.431985027Z" level=info msg="cleaning up dead shim" Mar 17 18:54:28.439723 env[1450]: time="2025-03-17T18:54:28.439671395Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3421 runtime=io.containerd.runc.v2\n" Mar 17 18:54:28.440059 env[1450]: time="2025-03-17T18:54:28.440004467Z" level=info msg="TearDown network for sandbox \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" successfully" Mar 17 18:54:28.440059 env[1450]: time="2025-03-17T18:54:28.440033830Z" level=info msg="StopPodSandbox for \"bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e\" returns successfully" Mar 17 18:54:28.566274 kubelet[1933]: I0317 18:54:28.566228 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-xtables-lock\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566274 kubelet[1933]: I0317 18:54:28.566276 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cni-path\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566473 kubelet[1933]: I0317 18:54:28.566298 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dsns\" (UniqueName: \"kubernetes.io/projected/651bdb21-66a7-4e84-8e44-78d197cc2f79-kube-api-access-2dsns\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566473 kubelet[1933]: I0317 18:54:28.566342 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-host-proc-sys-net\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566473 kubelet[1933]: I0317 18:54:28.566362 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/651bdb21-66a7-4e84-8e44-78d197cc2f79-clustermesh-secrets\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566473 kubelet[1933]: I0317 18:54:28.566378 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-host-proc-sys-kernel\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566473 kubelet[1933]: I0317 18:54:28.566392 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-hostproc\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" 
(UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566473 kubelet[1933]: I0317 18:54:28.566407 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-lib-modules\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566614 kubelet[1933]: I0317 18:54:28.566421 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-bpf-maps\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566614 kubelet[1933]: I0317 18:54:28.566434 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-run\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566614 kubelet[1933]: I0317 18:54:28.566449 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-etc-cni-netd\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566614 kubelet[1933]: I0317 18:54:28.566466 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-config-path\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566614 kubelet[1933]: I0317 18:54:28.566480 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-cgroup\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566614 kubelet[1933]: I0317 18:54:28.566497 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/651bdb21-66a7-4e84-8e44-78d197cc2f79-hubble-tls\") pod \"651bdb21-66a7-4e84-8e44-78d197cc2f79\" (UID: \"651bdb21-66a7-4e84-8e44-78d197cc2f79\") " Mar 17 18:54:28.566851 kubelet[1933]: I0317 18:54:28.566830 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-hostproc" (OuterVolumeSpecName: "hostproc") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.566970 kubelet[1933]: I0317 18:54:28.566955 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.567060 kubelet[1933]: I0317 18:54:28.567048 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cni-path" (OuterVolumeSpecName: "cni-path") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.569495 kubelet[1933]: I0317 18:54:28.567159 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.569629 kubelet[1933]: I0317 18:54:28.567175 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.569705 kubelet[1933]: I0317 18:54:28.567206 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.569793 kubelet[1933]: I0317 18:54:28.567222 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.569861 kubelet[1933]: I0317 18:54:28.569407 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:54:28.569925 kubelet[1933]: I0317 18:54:28.569455 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.570590 kubelet[1933]: I0317 18:54:28.570567 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.570710 systemd[1]: var-lib-kubelet-pods-651bdb21\x2d66a7\x2d4e84\x2d8e44\x2d78d197cc2f79-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:54:28.571150 kubelet[1933]: I0317 18:54:28.570861 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.573817 kubelet[1933]: I0317 18:54:28.573780 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651bdb21-66a7-4e84-8e44-78d197cc2f79-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:54:28.575641 systemd[1]: var-lib-kubelet-pods-651bdb21\x2d66a7\x2d4e84\x2d8e44\x2d78d197cc2f79-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2dsns.mount: Deactivated successfully. Mar 17 18:54:28.576961 kubelet[1933]: I0317 18:54:28.576923 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651bdb21-66a7-4e84-8e44-78d197cc2f79-kube-api-access-2dsns" (OuterVolumeSpecName: "kube-api-access-2dsns") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "kube-api-access-2dsns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:54:28.579996 kubelet[1933]: I0317 18:54:28.579964 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/651bdb21-66a7-4e84-8e44-78d197cc2f79-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "651bdb21-66a7-4e84-8e44-78d197cc2f79" (UID: "651bdb21-66a7-4e84-8e44-78d197cc2f79"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:54:28.580955 systemd[1]: var-lib-kubelet-pods-651bdb21\x2d66a7\x2d4e84\x2d8e44\x2d78d197cc2f79-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:54:28.667759 kubelet[1933]: I0317 18:54:28.667660 1933 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-cgroup\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.667913 kubelet[1933]: I0317 18:54:28.667901 1933 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/651bdb21-66a7-4e84-8e44-78d197cc2f79-hubble-tls\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.668007 kubelet[1933]: I0317 18:54:28.667990 1933 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-config-path\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.668081 kubelet[1933]: I0317 18:54:28.668072 1933 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-host-proc-sys-net\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.668147 kubelet[1933]: I0317 18:54:28.668130 1933 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/651bdb21-66a7-4e84-8e44-78d197cc2f79-clustermesh-secrets\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.668208 kubelet[1933]: I0317 18:54:28.668199 1933 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-xtables-lock\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.668276 kubelet[1933]: I0317 18:54:28.668257 1933 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cni-path\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.668336 kubelet[1933]: I0317 18:54:28.668327 1933 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2dsns\" (UniqueName: \"kubernetes.io/projected/651bdb21-66a7-4e84-8e44-78d197cc2f79-kube-api-access-2dsns\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.668399 kubelet[1933]: I0317 18:54:28.668382 1933 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-lib-modules\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.668459 kubelet[1933]: I0317 18:54:28.668449 1933 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-host-proc-sys-kernel\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.668519 kubelet[1933]: I0317 18:54:28.668503 1933 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-hostproc\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.668579 kubelet[1933]: I0317 18:54:28.668570 1933 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-cilium-run\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.668644 kubelet[1933]: I0317 18:54:28.668627 1933 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-etc-cni-netd\") on node \"10.200.20.24\" DevicePath \"\"" 
Mar 17 18:54:28.668702 kubelet[1933]: I0317 18:54:28.668693 1933 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/651bdb21-66a7-4e84-8e44-78d197cc2f79-bpf-maps\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:28.691267 kubelet[1933]: E0317 18:54:28.691246 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:28.932686 kubelet[1933]: I0317 18:54:28.932658 1933 scope.go:117] "RemoveContainer" containerID="f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09" Mar 17 18:54:28.936589 systemd[1]: Removed slice kubepods-burstable-pod651bdb21_66a7_4e84_8e44_78d197cc2f79.slice. Mar 17 18:54:28.936675 systemd[1]: kubepods-burstable-pod651bdb21_66a7_4e84_8e44_78d197cc2f79.slice: Consumed 6.743s CPU time. Mar 17 18:54:28.938826 env[1450]: time="2025-03-17T18:54:28.938782523Z" level=info msg="RemoveContainer for \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\"" Mar 17 18:54:28.957007 env[1450]: time="2025-03-17T18:54:28.956957445Z" level=info msg="RemoveContainer for \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\" returns successfully" Mar 17 18:54:28.957287 kubelet[1933]: I0317 18:54:28.957261 1933 scope.go:117] "RemoveContainer" containerID="9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13" Mar 17 18:54:28.958319 env[1450]: time="2025-03-17T18:54:28.958285851Z" level=info msg="RemoveContainer for \"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13\"" Mar 17 18:54:28.975382 env[1450]: time="2025-03-17T18:54:28.975323665Z" level=info msg="RemoveContainer for \"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13\" returns successfully" Mar 17 18:54:28.975718 kubelet[1933]: I0317 18:54:28.975699 1933 scope.go:117] "RemoveContainer" containerID="6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a" Mar 17 18:54:28.977072 env[1450]: time="2025-03-17T18:54:28.977036067Z" level=info msg="RemoveContainer for \"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a\"" Mar 17 18:54:28.987902 env[1450]: time="2025-03-17T18:54:28.987848531Z" level=info msg="RemoveContainer for \"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a\" returns successfully" Mar 17 18:54:28.988132 kubelet[1933]: I0317 18:54:28.988099 1933 scope.go:117] "RemoveContainer" containerID="bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8" Mar 17 18:54:28.989307 env[1450]: time="2025-03-17T18:54:28.989271346Z" level=info msg="RemoveContainer for \"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8\"" Mar 17 18:54:29.005105 env[1450]: time="2025-03-17T18:54:29.005057834Z" level=info msg="RemoveContainer for \"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8\" returns successfully" Mar 17 18:54:29.005321 kubelet[1933]: I0317 18:54:29.005293 1933 scope.go:117] "RemoveContainer" containerID="62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd" Mar 17 18:54:29.006331 env[1450]: time="2025-03-17T18:54:29.006304750Z" level=info msg="RemoveContainer for \"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd\"" Mar 17 18:54:29.020460 env[1450]: time="2025-03-17T18:54:29.020417055Z" level=info msg="RemoveContainer for \"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd\" returns successfully" Mar 17 18:54:29.020835 kubelet[1933]: I0317 18:54:29.020807 1933 scope.go:117] "RemoveContainer" 
containerID="f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09" Mar 17 18:54:29.021301 env[1450]: time="2025-03-17T18:54:29.021223689Z" level=error msg="ContainerStatus for \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\": not found" Mar 17 18:54:29.021482 kubelet[1933]: E0317 18:54:29.021419 1933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\": not found" containerID="f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09" Mar 17 18:54:29.021577 kubelet[1933]: I0317 18:54:29.021490 1933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09"} err="failed to get container status \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09\": not found" Mar 17 18:54:29.021577 kubelet[1933]: I0317 18:54:29.021576 1933 scope.go:117] "RemoveContainer" containerID="9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13" Mar 17 18:54:29.021846 env[1450]: time="2025-03-17T18:54:29.021795622Z" level=error msg="ContainerStatus for \"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13\": not found" Mar 17 18:54:29.022031 kubelet[1933]: E0317 18:54:29.022006 1933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13\": not found" containerID="9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13" Mar 17 18:54:29.022100 kubelet[1933]: I0317 18:54:29.022029 1933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13"} err="failed to get container status \"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ae6bf251fd9f5ada9aa12c6c9fe12ef08dd788a398ecb616313ecf0b40cfd13\": not found" Mar 17 18:54:29.022100 kubelet[1933]: I0317 18:54:29.022048 1933 scope.go:117] "RemoveContainer" containerID="6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a" Mar 17 18:54:29.022309 env[1450]: time="2025-03-17T18:54:29.022264866Z" level=error msg="ContainerStatus for \"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a\": not found" Mar 17 18:54:29.022483 kubelet[1933]: E0317 18:54:29.022461 1933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a\": not found" 
containerID="6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a" Mar 17 18:54:29.022550 kubelet[1933]: I0317 18:54:29.022483 1933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a"} err="failed to get container status \"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d1467d1d8799c19bb75b62730f231ced7dc61ddd9e584878ae38f7917f0353a\": not found" Mar 17 18:54:29.022550 kubelet[1933]: I0317 18:54:29.022497 1933 scope.go:117] "RemoveContainer" containerID="bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8" Mar 17 18:54:29.022776 env[1450]: time="2025-03-17T18:54:29.022716067Z" level=error msg="ContainerStatus for \"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8\": not found" Mar 17 18:54:29.022930 kubelet[1933]: E0317 18:54:29.022909 1933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8\": not found" containerID="bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8" Mar 17 18:54:29.023003 kubelet[1933]: I0317 18:54:29.022929 1933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8"} err="failed to get container status \"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdc91e174e464156d33b8b280e458a50c2c11f2d3b5ffc81e1222e6bfd6459f8\": not found" Mar 17 18:54:29.023003 kubelet[1933]: I0317 18:54:29.022941 1933 scope.go:117] "RemoveContainer" containerID="62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd" Mar 17 18:54:29.023203 env[1450]: time="2025-03-17T18:54:29.023163829Z" level=error msg="ContainerStatus for \"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd\": not found" Mar 17 18:54:29.023360 kubelet[1933]: E0317 18:54:29.023340 1933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd\": not found" containerID="62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd" Mar 17 18:54:29.023428 kubelet[1933]: I0317 18:54:29.023359 1933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd"} err="failed to get container status \"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"62268ee589935d777dea9dad1307b51f41018d96461117923a09f931559f32bd\": not found" Mar 17 18:54:29.692401 kubelet[1933]: E0317 18:54:29.692359 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 
18:54:29.755022 kubelet[1933]: I0317 18:54:29.754985 1933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651bdb21-66a7-4e84-8e44-78d197cc2f79" path="/var/lib/kubelet/pods/651bdb21-66a7-4e84-8e44-78d197cc2f79/volumes" Mar 17 18:54:30.693472 kubelet[1933]: E0317 18:54:30.693421 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:30.970061 kubelet[1933]: E0317 18:54:30.969956 1933 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="651bdb21-66a7-4e84-8e44-78d197cc2f79" containerName="mount-cgroup" Mar 17 18:54:30.970061 kubelet[1933]: E0317 18:54:30.969994 1933 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="651bdb21-66a7-4e84-8e44-78d197cc2f79" containerName="apply-sysctl-overwrites" Mar 17 18:54:30.970061 kubelet[1933]: E0317 18:54:30.970003 1933 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="651bdb21-66a7-4e84-8e44-78d197cc2f79" containerName="mount-bpf-fs" Mar 17 18:54:30.970061 kubelet[1933]: E0317 18:54:30.970009 1933 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="651bdb21-66a7-4e84-8e44-78d197cc2f79" containerName="clean-cilium-state" Mar 17 18:54:30.970061 kubelet[1933]: E0317 18:54:30.970016 1933 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="651bdb21-66a7-4e84-8e44-78d197cc2f79" containerName="cilium-agent" Mar 17 18:54:30.970061 kubelet[1933]: I0317 18:54:30.970035 1933 memory_manager.go:354] "RemoveStaleState removing state" podUID="651bdb21-66a7-4e84-8e44-78d197cc2f79" containerName="cilium-agent" Mar 17 18:54:30.975438 systemd[1]: Created slice kubepods-besteffort-pod6054e6b9_98fd_4e0c_abee_7a4acf25cffd.slice. Mar 17 18:54:31.002225 systemd[1]: Created slice kubepods-burstable-poded46be82_2be7_4035_aa3e_cd6a9b80c5b9.slice. 
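Teardown of the old cilium pod is complete at this point. The sequence recorded above — StopContainer with a 2-second timeout, StopPodSandbox, RemoveContainer for each container, then ContainerStatus calls that come back NotFound once the containers are gone — is the kubelet driving containerd over the CRI gRPC API. A hedged sketch of the same calls issued directly against the CRI socket; the container and sandbox IDs are taken from this log, while the socket path and error handling are assumptions for illustration:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's default CRI endpoint; adjust if the socket lives elsewhere.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	const (
		containerID = "f8c7b802c3f086260173f58771ff40f2a9e15aa87a338e8f485f82182bf3ac09"
		sandboxID   = "bf182bf71da305446403a7b9157438ffbc998d1caf3e7fffee346d86620f423e"
	)

	// "StopContainer ... with timeout 2 (s)": SIGTERM, escalate after 2s.
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: containerID, Timeout: 2,
	}); err != nil {
		log.Fatal(err)
	}

	// "StopPodSandbox ... returns successfully": tears down the sandbox.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: sandboxID,
	}); err != nil {
		log.Fatal(err)
	}

	// "RemoveContainer ... returns successfully": deletes container metadata.
	if _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{
		ContainerId: containerID,
	}); err != nil {
		log.Fatal(err)
	}

	// After removal, ContainerStatus produces the NotFound errors seen above.
	if _, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
		ContainerId: containerID,
	}); err != nil {
		log.Printf("ContainerStatus: %v", err) // rpc error: code = NotFound ...
	}
}
```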
Mar 17 18:54:31.078950 kubelet[1933]: I0317 18:54:31.078917 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6054e6b9-98fd-4e0c-abee-7a4acf25cffd-cilium-config-path\") pod \"cilium-operator-5d85765b45-62j7b\" (UID: \"6054e6b9-98fd-4e0c-abee-7a4acf25cffd\") " pod="kube-system/cilium-operator-5d85765b45-62j7b" Mar 17 18:54:31.079157 kubelet[1933]: I0317 18:54:31.079141 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5zp7\" (UniqueName: \"kubernetes.io/projected/6054e6b9-98fd-4e0c-abee-7a4acf25cffd-kube-api-access-v5zp7\") pod \"cilium-operator-5d85765b45-62j7b\" (UID: \"6054e6b9-98fd-4e0c-abee-7a4acf25cffd\") " pod="kube-system/cilium-operator-5d85765b45-62j7b" Mar 17 18:54:31.179827 kubelet[1933]: I0317 18:54:31.179802 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-hostproc\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.180835 kubelet[1933]: I0317 18:54:31.180812 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-xtables-lock\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.180968 kubelet[1933]: I0317 18:54:31.180954 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-run\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181062 kubelet[1933]: I0317 18:54:31.181048 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cni-path\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181142 kubelet[1933]: I0317 18:54:31.181130 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-clustermesh-secrets\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181237 kubelet[1933]: I0317 18:54:31.181214 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-config-path\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181323 kubelet[1933]: I0317 18:54:31.181304 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-host-proc-sys-net\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181419 kubelet[1933]: I0317 18:54:31.181407 1933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-host-proc-sys-kernel\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181485 kubelet[1933]: I0317 18:54:31.181474 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-hubble-tls\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181551 kubelet[1933]: I0317 18:54:31.181539 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-ipsec-secrets\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181623 kubelet[1933]: I0317 18:54:31.181612 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-cgroup\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181709 kubelet[1933]: I0317 18:54:31.181687 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-etc-cni-netd\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181807 kubelet[1933]: I0317 18:54:31.181793 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f4nj\" (UniqueName: \"kubernetes.io/projected/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-kube-api-access-5f4nj\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181907 kubelet[1933]: I0317 18:54:31.181891 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-bpf-maps\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.181981 kubelet[1933]: I0317 18:54:31.181969 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-lib-modules\") pod \"cilium-wlxbv\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " pod="kube-system/cilium-wlxbv" Mar 17 18:54:31.278570 env[1450]: time="2025-03-17T18:54:31.277848272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-62j7b,Uid:6054e6b9-98fd-4e0c-abee-7a4acf25cffd,Namespace:kube-system,Attempt:0,}" Mar 17 18:54:31.327981 env[1450]: time="2025-03-17T18:54:31.327900884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:54:31.327981 env[1450]: time="2025-03-17T18:54:31.327945408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:54:31.327981 env[1450]: time="2025-03-17T18:54:31.327956368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:54:31.328404 env[1450]: time="2025-03-17T18:54:31.328361964Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b4a62fd14772fcbc5ea6c20fe64cd3dab45529b885e093cadfb0424fba7fb6d pid=3450 runtime=io.containerd.runc.v2 Mar 17 18:54:31.339385 systemd[1]: Started cri-containerd-1b4a62fd14772fcbc5ea6c20fe64cd3dab45529b885e093cadfb0424fba7fb6d.scope. Mar 17 18:54:31.373396 env[1450]: time="2025-03-17T18:54:31.373321567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-62j7b,Uid:6054e6b9-98fd-4e0c-abee-7a4acf25cffd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b4a62fd14772fcbc5ea6c20fe64cd3dab45529b885e093cadfb0424fba7fb6d\"" Mar 17 18:54:31.375430 env[1450]: time="2025-03-17T18:54:31.375397830Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:54:31.613035 env[1450]: time="2025-03-17T18:54:31.612928727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlxbv,Uid:ed46be82-2be7-4035-aa3e-cd6a9b80c5b9,Namespace:kube-system,Attempt:0,}" Mar 17 18:54:31.649251 env[1450]: time="2025-03-17T18:54:31.649167161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:54:31.649251 env[1450]: time="2025-03-17T18:54:31.649209125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:54:31.649480 env[1450]: time="2025-03-17T18:54:31.649445346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:54:31.649820 env[1450]: time="2025-03-17T18:54:31.649736691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4 pid=3492 runtime=io.containerd.runc.v2 Mar 17 18:54:31.659710 systemd[1]: Started cri-containerd-faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4.scope. 
Mar 17 18:54:31.684588 env[1450]: time="2025-03-17T18:54:31.684538519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlxbv,Uid:ed46be82-2be7-4035-aa3e-cd6a9b80c5b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4\"" Mar 17 18:54:31.687915 env[1450]: time="2025-03-17T18:54:31.687878373Z" level=info msg="CreateContainer within sandbox \"faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:54:31.694499 kubelet[1933]: E0317 18:54:31.694444 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:31.733352 env[1450]: time="2025-03-17T18:54:31.733294536Z" level=info msg="CreateContainer within sandbox \"faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a\"" Mar 17 18:54:31.734212 env[1450]: time="2025-03-17T18:54:31.734184815Z" level=info msg="StartContainer for \"daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a\"" Mar 17 18:54:31.748390 systemd[1]: Started cri-containerd-daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a.scope. Mar 17 18:54:31.758984 systemd[1]: cri-containerd-daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a.scope: Deactivated successfully. Mar 17 18:54:31.766781 kubelet[1933]: E0317 18:54:31.766717 1933 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:31.833310 env[1450]: time="2025-03-17T18:54:31.833255987Z" level=info msg="shim disconnected" id=daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a Mar 17 18:54:31.833310 env[1450]: time="2025-03-17T18:54:31.833307992Z" level=warning msg="cleaning up after shim disconnected" id=daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a namespace=k8s.io Mar 17 18:54:31.833517 env[1450]: time="2025-03-17T18:54:31.833317633Z" level=info msg="cleaning up dead shim" Mar 17 18:54:31.840582 env[1450]: time="2025-03-17T18:54:31.840525788Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3551 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:54:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:54:31.840972 env[1450]: time="2025-03-17T18:54:31.840836376Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Mar 17 18:54:31.843855 env[1450]: time="2025-03-17T18:54:31.843809518Z" level=error msg="Failed to pipe stdout of container \"daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a\"" error="reading from a closed fifo" Mar 17 18:54:31.844179 env[1450]: time="2025-03-17T18:54:31.843985493Z" level=error msg="Failed to pipe stderr of container \"daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a\"" error="reading from a closed fifo" Mar 17 18:54:31.848471 env[1450]: time="2025-03-17T18:54:31.848394042Z" level=error msg="StartContainer for 
\"daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:54:31.849005 kubelet[1933]: E0317 18:54:31.848811 1933 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a" Mar 17 18:54:31.849005 kubelet[1933]: E0317 18:54:31.848966 1933 kuberuntime_manager.go:1272] "Unhandled Error" err=< Mar 17 18:54:31.849005 kubelet[1933]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:54:31.849005 kubelet[1933]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:54:31.849005 kubelet[1933]: rm /hostbin/cilium-mount Mar 17 18:54:31.849212 kubelet[1933]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5f4nj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wlxbv_kube-system(ed46be82-2be7-4035-aa3e-cd6a9b80c5b9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:54:31.849212 kubelet[1933]: > logger="UnhandledError" Mar 17 18:54:31.850311 kubelet[1933]: E0317 18:54:31.850266 1933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to 
create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wlxbv" podUID="ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" Mar 17 18:54:31.942705 env[1450]: time="2025-03-17T18:54:31.942663831Z" level=info msg="CreateContainer within sandbox \"faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Mar 17 18:54:31.973760 env[1450]: time="2025-03-17T18:54:31.973684845Z" level=info msg="CreateContainer within sandbox \"faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f\"" Mar 17 18:54:31.974519 env[1450]: time="2025-03-17T18:54:31.974469114Z" level=info msg="StartContainer for \"e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f\"" Mar 17 18:54:31.993595 systemd[1]: Started cri-containerd-e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f.scope. Mar 17 18:54:32.003961 systemd[1]: cri-containerd-e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f.scope: Deactivated successfully. Mar 17 18:54:32.024311 env[1450]: time="2025-03-17T18:54:32.024250374Z" level=info msg="shim disconnected" id=e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f Mar 17 18:54:32.024311 env[1450]: time="2025-03-17T18:54:32.024307859Z" level=warning msg="cleaning up after shim disconnected" id=e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f namespace=k8s.io Mar 17 18:54:32.024311 env[1450]: time="2025-03-17T18:54:32.024317380Z" level=info msg="cleaning up dead shim" Mar 17 18:54:32.031387 env[1450]: time="2025-03-17T18:54:32.031324663Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3590 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:54:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:54:32.031702 env[1450]: time="2025-03-17T18:54:32.031601327Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Mar 17 18:54:32.034025 env[1450]: time="2025-03-17T18:54:32.033981371Z" level=error msg="Failed to pipe stderr of container \"e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f\"" error="reading from a closed fifo" Mar 17 18:54:32.034219 env[1450]: time="2025-03-17T18:54:32.034180669Z" level=error msg="Failed to pipe stdout of container \"e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f\"" error="reading from a closed fifo" Mar 17 18:54:32.038506 env[1450]: time="2025-03-17T18:54:32.038455237Z" level=error msg="StartContainer for \"e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:54:32.039093 kubelet[1933]: E0317 18:54:32.038911 1933 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: 
failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f" Mar 17 18:54:32.039093 kubelet[1933]: E0317 18:54:32.039043 1933 kuberuntime_manager.go:1272] "Unhandled Error" err=< Mar 17 18:54:32.039093 kubelet[1933]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:54:32.039093 kubelet[1933]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:54:32.039093 kubelet[1933]: rm /hostbin/cilium-mount Mar 17 18:54:32.039303 kubelet[1933]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5f4nj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wlxbv_kube-system(ed46be82-2be7-4035-aa3e-cd6a9b80c5b9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:54:32.039303 kubelet[1933]: > logger="UnhandledError" Mar 17 18:54:32.040520 kubelet[1933]: E0317 18:54:32.040483 1933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wlxbv" podUID="ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" Mar 17 18:54:32.694923 kubelet[1933]: E0317 18:54:32.694878 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:32.916986 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1439830657.mount: Deactivated successfully. Mar 17 18:54:32.944663 kubelet[1933]: I0317 18:54:32.944389 1933 scope.go:117] "RemoveContainer" containerID="daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a" Mar 17 18:54:32.944827 env[1450]: time="2025-03-17T18:54:32.944678924Z" level=info msg="StopPodSandbox for \"faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4\"" Mar 17 18:54:32.944827 env[1450]: time="2025-03-17T18:54:32.944731929Z" level=info msg="Container to stop \"daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:32.944827 env[1450]: time="2025-03-17T18:54:32.944764011Z" level=info msg="Container to stop \"e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:32.946852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4-shm.mount: Deactivated successfully. Mar 17 18:54:32.950825 env[1450]: time="2025-03-17T18:54:32.950784050Z" level=info msg="RemoveContainer for \"daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a\"" Mar 17 18:54:32.954330 systemd[1]: cri-containerd-faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4.scope: Deactivated successfully. Mar 17 18:54:32.978155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4-rootfs.mount: Deactivated successfully. Mar 17 18:54:33.027682 env[1450]: time="2025-03-17T18:54:33.027634453Z" level=info msg="shim disconnected" id=faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4 Mar 17 18:54:33.027960 env[1450]: time="2025-03-17T18:54:33.027937638Z" level=warning msg="cleaning up after shim disconnected" id=faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4 namespace=k8s.io Mar 17 18:54:33.028030 env[1450]: time="2025-03-17T18:54:33.028016685Z" level=info msg="cleaning up dead shim" Mar 17 18:54:33.036041 env[1450]: time="2025-03-17T18:54:33.035996916Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3622 runtime=io.containerd.runc.v2\n" Mar 17 18:54:33.036528 env[1450]: time="2025-03-17T18:54:33.036497318Z" level=info msg="TearDown network for sandbox \"faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4\" successfully" Mar 17 18:54:33.036625 env[1450]: time="2025-03-17T18:54:33.036607207Z" level=info msg="StopPodSandbox for \"faa62a8a42d1fdf5200ada7c3f5d5c8108ad2a6ffad0a50d67fefcb9de49d8e4\" returns successfully" Mar 17 18:54:33.043247 env[1450]: time="2025-03-17T18:54:33.043198081Z" level=info msg="RemoveContainer for \"daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a\" returns successfully" Mar 17 18:54:33.197344 kubelet[1933]: I0317 18:54:33.197290 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f4nj\" (UniqueName: \"kubernetes.io/projected/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-kube-api-access-5f4nj\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197344 kubelet[1933]: I0317 18:54:33.197342 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-clustermesh-secrets\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197589 kubelet[1933]: I0317 18:54:33.197360 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-host-proc-sys-net\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197589 kubelet[1933]: I0317 18:54:33.197378 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-hubble-tls\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197589 kubelet[1933]: I0317 18:54:33.197396 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-etc-cni-netd\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197589 kubelet[1933]: I0317 18:54:33.197411 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-hostproc\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197589 kubelet[1933]: I0317 18:54:33.197427 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-xtables-lock\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197589 kubelet[1933]: I0317 18:54:33.197458 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-ipsec-secrets\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197790 kubelet[1933]: I0317 18:54:33.197473 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cni-path\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197790 kubelet[1933]: I0317 18:54:33.197492 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-run\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197790 kubelet[1933]: I0317 18:54:33.197510 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-config-path\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197790 kubelet[1933]: I0317 18:54:33.197526 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-host-proc-sys-kernel\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197790 kubelet[1933]: I0317 18:54:33.197540 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-cgroup\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197790 kubelet[1933]: I0317 18:54:33.197557 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-bpf-maps\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.197992 kubelet[1933]: I0317 18:54:33.197571 1933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-lib-modules\") pod \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\" (UID: \"ed46be82-2be7-4035-aa3e-cd6a9b80c5b9\") " Mar 17 18:54:33.198667 kubelet[1933]: I0317 18:54:33.198623 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.199019 kubelet[1933]: I0317 18:54:33.198992 1933 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-lib-modules\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.204726 kubelet[1933]: I0317 18:54:33.203559 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-kube-api-access-5f4nj" (OuterVolumeSpecName: "kube-api-access-5f4nj") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "kube-api-access-5f4nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:54:33.204726 kubelet[1933]: I0317 18:54:33.203627 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cni-path" (OuterVolumeSpecName: "cni-path") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.204726 kubelet[1933]: I0317 18:54:33.203650 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.205562 kubelet[1933]: I0317 18:54:33.205519 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:54:33.205657 kubelet[1933]: I0317 18:54:33.205634 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.205698 kubelet[1933]: I0317 18:54:33.205662 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.205698 kubelet[1933]: I0317 18:54:33.205680 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.207592 systemd[1]: var-lib-kubelet-pods-ed46be82\x2d2be7\x2d4035\x2daa3e\x2dcd6a9b80c5b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5f4nj.mount: Deactivated successfully. Mar 17 18:54:33.207698 systemd[1]: var-lib-kubelet-pods-ed46be82\x2d2be7\x2d4035\x2daa3e\x2dcd6a9b80c5b9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 18:54:33.209142 kubelet[1933]: I0317 18:54:33.207851 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.209142 kubelet[1933]: I0317 18:54:33.207903 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-hostproc" (OuterVolumeSpecName: "hostproc") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.209142 kubelet[1933]: I0317 18:54:33.207922 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.209427 kubelet[1933]: I0317 18:54:33.209279 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.211123 kubelet[1933]: I0317 18:54:33.211085 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:54:33.214615 systemd[1]: var-lib-kubelet-pods-ed46be82\x2d2be7\x2d4035\x2daa3e\x2dcd6a9b80c5b9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:54:33.216083 kubelet[1933]: I0317 18:54:33.216037 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:54:33.218962 kubelet[1933]: I0317 18:54:33.218926 1933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" (UID: "ed46be82-2be7-4035-aa3e-cd6a9b80c5b9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:54:33.219899 systemd[1]: var-lib-kubelet-pods-ed46be82\x2d2be7\x2d4035\x2daa3e\x2dcd6a9b80c5b9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:54:33.300204 kubelet[1933]: I0317 18:54:33.300038 1933 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-xtables-lock\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300204 kubelet[1933]: I0317 18:54:33.300074 1933 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-ipsec-secrets\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300204 kubelet[1933]: I0317 18:54:33.300084 1933 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-etc-cni-netd\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300204 kubelet[1933]: I0317 18:54:33.300093 1933 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-hostproc\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300204 kubelet[1933]: I0317 18:54:33.300103 1933 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cni-path\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300204 kubelet[1933]: I0317 18:54:33.300110 1933 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-host-proc-sys-kernel\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300204 kubelet[1933]: I0317 18:54:33.300118 1933 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-cgroup\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300204 kubelet[1933]: I0317 18:54:33.300125 1933 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-bpf-maps\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300521 kubelet[1933]: I0317 18:54:33.300134 1933 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-run\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300521 kubelet[1933]: I0317 18:54:33.300142 1933 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-cilium-config-path\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300521 kubelet[1933]: I0317 18:54:33.300150 1933 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-hubble-tls\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300521 kubelet[1933]: I0317 18:54:33.300158 1933 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5f4nj\" (UniqueName: \"kubernetes.io/projected/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-kube-api-access-5f4nj\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.300521 kubelet[1933]: I0317 18:54:33.300165 1933 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-clustermesh-secrets\") on node \"10.200.20.24\" DevicePath \"\"" Mar 
17 18:54:33.300521 kubelet[1933]: I0317 18:54:33.300174 1933 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9-host-proc-sys-net\") on node \"10.200.20.24\" DevicePath \"\"" Mar 17 18:54:33.488155 kubelet[1933]: I0317 18:54:33.487111 1933 setters.go:600] "Node became not ready" node="10.200.20.24" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:54:33Z","lastTransitionTime":"2025-03-17T18:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:54:33.608921 env[1450]: time="2025-03-17T18:54:33.608857482Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:54:33.623153 env[1450]: time="2025-03-17T18:54:33.623105360Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:54:33.628033 env[1450]: time="2025-03-17T18:54:33.627980570Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:54:33.628588 env[1450]: time="2025-03-17T18:54:33.628555738Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 18:54:33.631212 env[1450]: time="2025-03-17T18:54:33.631169398Z" level=info msg="CreateContainer within sandbox \"1b4a62fd14772fcbc5ea6c20fe64cd3dab45529b885e093cadfb0424fba7fb6d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:54:33.669120 env[1450]: time="2025-03-17T18:54:33.669068385Z" level=info msg="CreateContainer within sandbox \"1b4a62fd14772fcbc5ea6c20fe64cd3dab45529b885e093cadfb0424fba7fb6d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3be8b289dbf058c06de5f4f98180aba0c3e0abbe728a90337135ae9d795789e9\"" Mar 17 18:54:33.670153 env[1450]: time="2025-03-17T18:54:33.670114112Z" level=info msg="StartContainer for \"3be8b289dbf058c06de5f4f98180aba0c3e0abbe728a90337135ae9d795789e9\"" Mar 17 18:54:33.685375 systemd[1]: Started cri-containerd-3be8b289dbf058c06de5f4f98180aba0c3e0abbe728a90337135ae9d795789e9.scope. Mar 17 18:54:33.695274 kubelet[1933]: E0317 18:54:33.695196 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:33.717857 env[1450]: time="2025-03-17T18:54:33.717789121Z" level=info msg="StartContainer for \"3be8b289dbf058c06de5f4f98180aba0c3e0abbe728a90337135ae9d795789e9\" returns successfully" Mar 17 18:54:33.759571 systemd[1]: Removed slice kubepods-burstable-poded46be82_2be7_4035_aa3e_cd6a9b80c5b9.slice. 
Mar 17 18:54:33.947520 kubelet[1933]: I0317 18:54:33.947492 1933 scope.go:117] "RemoveContainer" containerID="e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f" Mar 17 18:54:33.957006 env[1450]: time="2025-03-17T18:54:33.956953350Z" level=info msg="RemoveContainer for \"e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f\"" Mar 17 18:54:33.975060 env[1450]: time="2025-03-17T18:54:33.975008188Z" level=info msg="RemoveContainer for \"e53988219ced0229ffe86120e9ab33d5642e78921050d1bade245bb5e723e30f\" returns successfully" Mar 17 18:54:34.006726 kubelet[1933]: E0317 18:54:34.006676 1933 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" containerName="mount-cgroup" Mar 17 18:54:34.006726 kubelet[1933]: E0317 18:54:34.006720 1933 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" containerName="mount-cgroup" Mar 17 18:54:34.006908 kubelet[1933]: I0317 18:54:34.006765 1933 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" containerName="mount-cgroup" Mar 17 18:54:34.006908 kubelet[1933]: I0317 18:54:34.006796 1933 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" containerName="mount-cgroup" Mar 17 18:54:34.012125 systemd[1]: Created slice kubepods-burstable-podd245c009_afe3_455e_abf4_9f6feb566282.slice. Mar 17 18:54:34.049478 kubelet[1933]: I0317 18:54:34.049415 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-62j7b" podStartSLOduration=1.7944782529999999 podStartE2EDuration="4.049396752s" podCreationTimestamp="2025-03-17 18:54:30 +0000 UTC" firstStartedPulling="2025-03-17 18:54:31.374822419 +0000 UTC m=+71.178822529" lastFinishedPulling="2025-03-17 18:54:33.629740878 +0000 UTC m=+73.433741028" observedRunningTime="2025-03-17 18:54:34.028331302 +0000 UTC m=+73.832331452" watchObservedRunningTime="2025-03-17 18:54:34.049396752 +0000 UTC m=+73.853396902" Mar 17 18:54:34.103667 kubelet[1933]: I0317 18:54:34.103625 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d245c009-afe3-455e-abf4-9f6feb566282-cilium-run\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.103667 kubelet[1933]: I0317 18:54:34.103665 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d245c009-afe3-455e-abf4-9f6feb566282-bpf-maps\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.103864 kubelet[1933]: I0317 18:54:34.103686 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d245c009-afe3-455e-abf4-9f6feb566282-hostproc\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.103864 kubelet[1933]: I0317 18:54:34.103704 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d245c009-afe3-455e-abf4-9f6feb566282-host-proc-sys-kernel\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " 
pod="kube-system/cilium-vnn86" Mar 17 18:54:34.103864 kubelet[1933]: I0317 18:54:34.103722 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d245c009-afe3-455e-abf4-9f6feb566282-lib-modules\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.103864 kubelet[1933]: I0317 18:54:34.103736 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d245c009-afe3-455e-abf4-9f6feb566282-clustermesh-secrets\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.103864 kubelet[1933]: I0317 18:54:34.103762 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d245c009-afe3-455e-abf4-9f6feb566282-host-proc-sys-net\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.103864 kubelet[1933]: I0317 18:54:34.103779 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d245c009-afe3-455e-abf4-9f6feb566282-cni-path\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.104020 kubelet[1933]: I0317 18:54:34.103793 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d245c009-afe3-455e-abf4-9f6feb566282-etc-cni-netd\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.104020 kubelet[1933]: I0317 18:54:34.103809 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b92st\" (UniqueName: \"kubernetes.io/projected/d245c009-afe3-455e-abf4-9f6feb566282-kube-api-access-b92st\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.104020 kubelet[1933]: I0317 18:54:34.103827 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d245c009-afe3-455e-abf4-9f6feb566282-cilium-cgroup\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.104020 kubelet[1933]: I0317 18:54:34.103842 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d245c009-afe3-455e-abf4-9f6feb566282-cilium-config-path\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.104020 kubelet[1933]: I0317 18:54:34.103859 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d245c009-afe3-455e-abf4-9f6feb566282-cilium-ipsec-secrets\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.104135 kubelet[1933]: I0317 18:54:34.103875 1933 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d245c009-afe3-455e-abf4-9f6feb566282-hubble-tls\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.104135 kubelet[1933]: I0317 18:54:34.103904 1933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d245c009-afe3-455e-abf4-9f6feb566282-xtables-lock\") pod \"cilium-vnn86\" (UID: \"d245c009-afe3-455e-abf4-9f6feb566282\") " pod="kube-system/cilium-vnn86" Mar 17 18:54:34.203814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1418680734.mount: Deactivated successfully. Mar 17 18:54:34.317797 env[1450]: time="2025-03-17T18:54:34.317592023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vnn86,Uid:d245c009-afe3-455e-abf4-9f6feb566282,Namespace:kube-system,Attempt:0,}" Mar 17 18:54:34.346563 env[1450]: time="2025-03-17T18:54:34.346482396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:54:34.346723 env[1450]: time="2025-03-17T18:54:34.346568003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:54:34.346723 env[1450]: time="2025-03-17T18:54:34.346594525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:54:34.346951 env[1450]: time="2025-03-17T18:54:34.346886389Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c pid=3685 runtime=io.containerd.runc.v2 Mar 17 18:54:34.357992 systemd[1]: Started cri-containerd-e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c.scope. Mar 17 18:54:34.381539 env[1450]: time="2025-03-17T18:54:34.381490632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vnn86,Uid:d245c009-afe3-455e-abf4-9f6feb566282,Namespace:kube-system,Attempt:0,} returns sandbox id \"e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c\"" Mar 17 18:54:34.385248 env[1450]: time="2025-03-17T18:54:34.385202937Z" level=info msg="CreateContainer within sandbox \"e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:54:34.430132 env[1450]: time="2025-03-17T18:54:34.430073782Z" level=info msg="CreateContainer within sandbox \"e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"90d8e7996223561774579bc7fb6d7836170c3c062b33c48dbe844bb9324ae1fc\"" Mar 17 18:54:34.431107 env[1450]: time="2025-03-17T18:54:34.431075305Z" level=info msg="StartContainer for \"90d8e7996223561774579bc7fb6d7836170c3c062b33c48dbe844bb9324ae1fc\"" Mar 17 18:54:34.445963 systemd[1]: Started cri-containerd-90d8e7996223561774579bc7fb6d7836170c3c062b33c48dbe844bb9324ae1fc.scope. 
Mar 17 18:54:34.474543 env[1450]: time="2025-03-17T18:54:34.474411385Z" level=info msg="StartContainer for \"90d8e7996223561774579bc7fb6d7836170c3c062b33c48dbe844bb9324ae1fc\" returns successfully" Mar 17 18:54:34.479556 systemd[1]: cri-containerd-90d8e7996223561774579bc7fb6d7836170c3c062b33c48dbe844bb9324ae1fc.scope: Deactivated successfully. Mar 17 18:54:34.765580 kubelet[1933]: E0317 18:54:34.695460 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:34.841008 env[1450]: time="2025-03-17T18:54:34.840944133Z" level=info msg="shim disconnected" id=90d8e7996223561774579bc7fb6d7836170c3c062b33c48dbe844bb9324ae1fc Mar 17 18:54:34.841008 env[1450]: time="2025-03-17T18:54:34.841005178Z" level=warning msg="cleaning up after shim disconnected" id=90d8e7996223561774579bc7fb6d7836170c3c062b33c48dbe844bb9324ae1fc namespace=k8s.io Mar 17 18:54:34.841008 env[1450]: time="2025-03-17T18:54:34.841015979Z" level=info msg="cleaning up dead shim" Mar 17 18:54:34.847862 env[1450]: time="2025-03-17T18:54:34.847813337Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3766 runtime=io.containerd.runc.v2\n" Mar 17 18:54:34.938784 kubelet[1933]: W0317 18:54:34.938652 1933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded46be82_2be7_4035_aa3e_cd6a9b80c5b9.slice/cri-containerd-daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a.scope WatchSource:0}: container "daa24db21efa990310f951d4de501262e4896888630ee9b2e45c937d48d2406a" in namespace "k8s.io": not found Mar 17 18:54:34.964582 env[1450]: time="2025-03-17T18:54:34.964524084Z" level=info msg="CreateContainer within sandbox \"e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:54:35.008924 env[1450]: time="2025-03-17T18:54:35.008867391Z" level=info msg="CreateContainer within sandbox \"e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"42c4f6cb216c1e52a275c913d292db2ac44614d00ace3dc5f8a6360d8d6e3c23\"" Mar 17 18:54:35.009873 env[1450]: time="2025-03-17T18:54:35.009824668Z" level=info msg="StartContainer for \"42c4f6cb216c1e52a275c913d292db2ac44614d00ace3dc5f8a6360d8d6e3c23\"" Mar 17 18:54:35.026063 systemd[1]: Started cri-containerd-42c4f6cb216c1e52a275c913d292db2ac44614d00ace3dc5f8a6360d8d6e3c23.scope. Mar 17 18:54:35.068149 env[1450]: time="2025-03-17T18:54:35.068097345Z" level=info msg="StartContainer for \"42c4f6cb216c1e52a275c913d292db2ac44614d00ace3dc5f8a6360d8d6e3c23\" returns successfully" Mar 17 18:54:35.069487 systemd[1]: cri-containerd-42c4f6cb216c1e52a275c913d292db2ac44614d00ace3dc5f8a6360d8d6e3c23.scope: Deactivated successfully. 
Mar 17 18:54:35.114608 env[1450]: time="2025-03-17T18:54:35.114560235Z" level=info msg="shim disconnected" id=42c4f6cb216c1e52a275c913d292db2ac44614d00ace3dc5f8a6360d8d6e3c23 Mar 17 18:54:35.114876 env[1450]: time="2025-03-17T18:54:35.114855339Z" level=warning msg="cleaning up after shim disconnected" id=42c4f6cb216c1e52a275c913d292db2ac44614d00ace3dc5f8a6360d8d6e3c23 namespace=k8s.io Mar 17 18:54:35.114950 env[1450]: time="2025-03-17T18:54:35.114937065Z" level=info msg="cleaning up dead shim" Mar 17 18:54:35.122294 env[1450]: time="2025-03-17T18:54:35.122252652Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3830 runtime=io.containerd.runc.v2\n" Mar 17 18:54:35.587905 waagent[1640]: 2025-03-17T18:54:35.587812Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Mar 17 18:54:35.595453 waagent[1640]: 2025-03-17T18:54:35.595377Z INFO ExtHandler Mar 17 18:54:35.595608 waagent[1640]: 2025-03-17T18:54:35.595560Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Mar 17 18:54:35.675411 waagent[1640]: 2025-03-17T18:54:35.675345Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 18:54:35.696609 kubelet[1933]: E0317 18:54:35.696567 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:35.757355 kubelet[1933]: I0317 18:54:35.757299 1933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed46be82-2be7-4035-aa3e-cd6a9b80c5b9" path="/var/lib/kubelet/pods/ed46be82-2be7-4035-aa3e-cd6a9b80c5b9/volumes" Mar 17 18:54:35.828881 waagent[1640]: 2025-03-17T18:54:35.828710Z INFO ExtHandler Downloaded certificate {'thumbprint': '8F8B41FD1746D3A8C4057AEF131B50485E71CE21', 'hasPrivateKey': False} Mar 17 18:54:35.830030 waagent[1640]: 2025-03-17T18:54:35.829964Z INFO ExtHandler Downloaded certificate {'thumbprint': '43BFE01D6F67BBDFA6E5BA3A44BA085A2500D5E6', 'hasPrivateKey': True} Mar 17 18:54:35.831264 waagent[1640]: 2025-03-17T18:54:35.831197Z INFO ExtHandler Fetch goal state completed Mar 17 18:54:35.832324 waagent[1640]: 2025-03-17T18:54:35.832259Z INFO ExtHandler ExtHandler VM enabled for RSM updates, switching to RSM update mode Mar 17 18:54:35.833693 waagent[1640]: 2025-03-17T18:54:35.833634Z INFO ExtHandler ExtHandler Mar 17 18:54:35.833860 waagent[1640]: 2025-03-17T18:54:35.833808Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 55d0caef-181c-4ba8-a929-53f80b3baa46 correlation 21626d20-f279-413a-bb6b-a2b991b3b7ce created: 2025-03-17T18:54:26.036903Z] Mar 17 18:54:35.834654 waagent[1640]: 2025-03-17T18:54:35.834593Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 17 18:54:35.836630 waagent[1640]: 2025-03-17T18:54:35.836574Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 2 ms] Mar 17 18:54:35.970250 env[1450]: time="2025-03-17T18:54:35.970198714Z" level=info msg="CreateContainer within sandbox \"e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:54:36.000986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915586172.mount: Deactivated successfully. Mar 17 18:54:36.013511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383726767.mount: Deactivated successfully. 
Mar 17 18:54:36.035877 env[1450]: time="2025-03-17T18:54:36.035826158Z" level=info msg="CreateContainer within sandbox \"e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"969f0b7c30889b86a2882a864d9ceedd8879de47be9fe4ff33a128b3cc9d099f\"" Mar 17 18:54:36.036852 env[1450]: time="2025-03-17T18:54:36.036819596Z" level=info msg="StartContainer for \"969f0b7c30889b86a2882a864d9ceedd8879de47be9fe4ff33a128b3cc9d099f\"" Mar 17 18:54:36.052551 systemd[1]: Started cri-containerd-969f0b7c30889b86a2882a864d9ceedd8879de47be9fe4ff33a128b3cc9d099f.scope. Mar 17 18:54:36.081050 systemd[1]: cri-containerd-969f0b7c30889b86a2882a864d9ceedd8879de47be9fe4ff33a128b3cc9d099f.scope: Deactivated successfully. Mar 17 18:54:36.086606 env[1450]: time="2025-03-17T18:54:36.086547337Z" level=info msg="StartContainer for \"969f0b7c30889b86a2882a864d9ceedd8879de47be9fe4ff33a128b3cc9d099f\" returns successfully" Mar 17 18:54:36.127465 env[1450]: time="2025-03-17T18:54:36.127410783Z" level=info msg="shim disconnected" id=969f0b7c30889b86a2882a864d9ceedd8879de47be9fe4ff33a128b3cc9d099f Mar 17 18:54:36.127465 env[1450]: time="2025-03-17T18:54:36.127461987Z" level=warning msg="cleaning up after shim disconnected" id=969f0b7c30889b86a2882a864d9ceedd8879de47be9fe4ff33a128b3cc9d099f namespace=k8s.io Mar 17 18:54:36.127465 env[1450]: time="2025-03-17T18:54:36.127472388Z" level=info msg="cleaning up dead shim" Mar 17 18:54:36.134134 env[1450]: time="2025-03-17T18:54:36.134083906Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3892 runtime=io.containerd.runc.v2\n" Mar 17 18:54:36.697323 kubelet[1933]: E0317 18:54:36.697265 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:36.767642 kubelet[1933]: E0317 18:54:36.767574 1933 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:36.974653 env[1450]: time="2025-03-17T18:54:36.974505037Z" level=info msg="CreateContainer within sandbox \"e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:54:37.015558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185887480.mount: Deactivated successfully. Mar 17 18:54:37.022326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4134273456.mount: Deactivated successfully. Mar 17 18:54:37.041474 env[1450]: time="2025-03-17T18:54:37.041401453Z" level=info msg="CreateContainer within sandbox \"e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3071045c78c37018df5fd05be4e5f5fea72dc504cff2282de3e046cb014c0ef6\"" Mar 17 18:54:37.042359 env[1450]: time="2025-03-17T18:54:37.042326484Z" level=info msg="StartContainer for \"3071045c78c37018df5fd05be4e5f5fea72dc504cff2282de3e046cb014c0ef6\"" Mar 17 18:54:37.057149 systemd[1]: Started cri-containerd-3071045c78c37018df5fd05be4e5f5fea72dc504cff2282de3e046cb014c0ef6.scope. Mar 17 18:54:37.083446 systemd[1]: cri-containerd-3071045c78c37018df5fd05be4e5f5fea72dc504cff2282de3e046cb014c0ef6.scope: Deactivated successfully. 
Mar 17 18:54:37.087179 env[1450]: time="2025-03-17T18:54:37.087131120Z" level=info msg="StartContainer for \"3071045c78c37018df5fd05be4e5f5fea72dc504cff2282de3e046cb014c0ef6\" returns successfully" Mar 17 18:54:37.115398 env[1450]: time="2025-03-17T18:54:37.115351084Z" level=info msg="shim disconnected" id=3071045c78c37018df5fd05be4e5f5fea72dc504cff2282de3e046cb014c0ef6 Mar 17 18:54:37.115671 env[1450]: time="2025-03-17T18:54:37.115651147Z" level=warning msg="cleaning up after shim disconnected" id=3071045c78c37018df5fd05be4e5f5fea72dc504cff2282de3e046cb014c0ef6 namespace=k8s.io Mar 17 18:54:37.115739 env[1450]: time="2025-03-17T18:54:37.115726593Z" level=info msg="cleaning up dead shim" Mar 17 18:54:37.122839 env[1450]: time="2025-03-17T18:54:37.122798895Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3946 runtime=io.containerd.runc.v2\n" Mar 17 18:54:37.697393 kubelet[1933]: E0317 18:54:37.697358 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:37.978094 env[1450]: time="2025-03-17T18:54:37.977992998Z" level=info msg="CreateContainer within sandbox \"e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:54:38.004005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092383994.mount: Deactivated successfully. Mar 17 18:54:38.010535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078975201.mount: Deactivated successfully. Mar 17 18:54:38.024596 env[1450]: time="2025-03-17T18:54:38.024534047Z" level=info msg="CreateContainer within sandbox \"e99cb93a65bb0de5ced2d1c8f4ac66a9bc4cac55b1ff33249cbc99590803f53c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"41ab2e5dbd9e0a5130c8c013233ce5a19f3c56a985b14213b3689ea73c22f79f\"" Mar 17 18:54:38.025154 env[1450]: time="2025-03-17T18:54:38.025084088Z" level=info msg="StartContainer for \"41ab2e5dbd9e0a5130c8c013233ce5a19f3c56a985b14213b3689ea73c22f79f\"" Mar 17 18:54:38.039258 systemd[1]: Started cri-containerd-41ab2e5dbd9e0a5130c8c013233ce5a19f3c56a985b14213b3689ea73c22f79f.scope. 
Mar 17 18:54:38.058788 kubelet[1933]: W0317 18:54:38.057991 1933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd245c009_afe3_455e_abf4_9f6feb566282.slice/cri-containerd-90d8e7996223561774579bc7fb6d7836170c3c062b33c48dbe844bb9324ae1fc.scope WatchSource:0}: task 90d8e7996223561774579bc7fb6d7836170c3c062b33c48dbe844bb9324ae1fc not found: not found Mar 17 18:54:38.072100 env[1450]: time="2025-03-17T18:54:38.072051930Z" level=info msg="StartContainer for \"41ab2e5dbd9e0a5130c8c013233ce5a19f3c56a985b14213b3689ea73c22f79f\" returns successfully" Mar 17 18:54:38.390774 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Mar 17 18:54:38.698265 kubelet[1933]: E0317 18:54:38.698218 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:38.998550 kubelet[1933]: I0317 18:54:38.998210 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vnn86" podStartSLOduration=5.998183053 podStartE2EDuration="5.998183053s" podCreationTimestamp="2025-03-17 18:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:54:38.997929834 +0000 UTC m=+78.801929984" watchObservedRunningTime="2025-03-17 18:54:38.998183053 +0000 UTC m=+78.802183203" Mar 17 18:54:39.081489 systemd[1]: run-containerd-runc-k8s.io-41ab2e5dbd9e0a5130c8c013233ce5a19f3c56a985b14213b3689ea73c22f79f-runc.l1Ny99.mount: Deactivated successfully. Mar 17 18:54:39.699173 kubelet[1933]: E0317 18:54:39.699124 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:40.699654 kubelet[1933]: E0317 18:54:40.699608 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:41.120551 systemd-networkd[1603]: lxc_health: Link UP Mar 17 18:54:41.146123 systemd-networkd[1603]: lxc_health: Gained carrier Mar 17 18:54:41.146898 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:54:41.168282 kubelet[1933]: W0317 18:54:41.168216 1933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd245c009_afe3_455e_abf4_9f6feb566282.slice/cri-containerd-42c4f6cb216c1e52a275c913d292db2ac44614d00ace3dc5f8a6360d8d6e3c23.scope WatchSource:0}: task 42c4f6cb216c1e52a275c913d292db2ac44614d00ace3dc5f8a6360d8d6e3c23 not found: not found Mar 17 18:54:41.212829 systemd[1]: run-containerd-runc-k8s.io-41ab2e5dbd9e0a5130c8c013233ce5a19f3c56a985b14213b3689ea73c22f79f-runc.yP5wek.mount: Deactivated successfully. 
Mar 17 18:54:41.639053 kubelet[1933]: E0317 18:54:41.638998 1933 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:41.699816 kubelet[1933]: E0317 18:54:41.699771 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:41.877839 waagent[1640]: 2025-03-17T18:54:41.877728Z INFO ExtHandler Mar 17 18:54:41.878457 waagent[1640]: 2025-03-17T18:54:41.878395Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 747fb688-90b3-420f-b071-2b07e2934b78 eTag: 1922516891493038528 source: Fabric] Mar 17 18:54:41.879492 waagent[1640]: 2025-03-17T18:54:41.879422Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 17 18:54:42.700686 kubelet[1933]: E0317 18:54:42.700628 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:43.022941 systemd-networkd[1603]: lxc_health: Gained IPv6LL Mar 17 18:54:43.451217 systemd[1]: run-containerd-runc-k8s.io-41ab2e5dbd9e0a5130c8c013233ce5a19f3c56a985b14213b3689ea73c22f79f-runc.NoRRAE.mount: Deactivated successfully. Mar 17 18:54:43.701687 kubelet[1933]: E0317 18:54:43.701632 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:44.276108 kubelet[1933]: W0317 18:54:44.276069 1933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd245c009_afe3_455e_abf4_9f6feb566282.slice/cri-containerd-969f0b7c30889b86a2882a864d9ceedd8879de47be9fe4ff33a128b3cc9d099f.scope WatchSource:0}: task 969f0b7c30889b86a2882a864d9ceedd8879de47be9fe4ff33a128b3cc9d099f not found: not found Mar 17 18:54:44.702404 kubelet[1933]: E0317 18:54:44.702368 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:45.602798 systemd[1]: run-containerd-runc-k8s.io-41ab2e5dbd9e0a5130c8c013233ce5a19f3c56a985b14213b3689ea73c22f79f-runc.awtYaf.mount: Deactivated successfully. Mar 17 18:54:45.703445 kubelet[1933]: E0317 18:54:45.703364 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:46.704137 kubelet[1933]: E0317 18:54:46.704095 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:47.383595 kubelet[1933]: W0317 18:54:47.383555 1933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd245c009_afe3_455e_abf4_9f6feb566282.slice/cri-containerd-3071045c78c37018df5fd05be4e5f5fea72dc504cff2282de3e046cb014c0ef6.scope WatchSource:0}: task 3071045c78c37018df5fd05be4e5f5fea72dc504cff2282de3e046cb014c0ef6 not found: not found Mar 17 18:54:47.707386 kubelet[1933]: E0317 18:54:47.707350 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:47.738207 systemd[1]: run-containerd-runc-k8s.io-41ab2e5dbd9e0a5130c8c013233ce5a19f3c56a985b14213b3689ea73c22f79f-runc.qITG0o.mount: Deactivated successfully. 
Mar 17 18:54:48.708493 kubelet[1933]: E0317 18:54:48.708433 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:49.709285 kubelet[1933]: E0317 18:54:49.709241 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:50.710157 kubelet[1933]: E0317 18:54:50.710115 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:51.710271 kubelet[1933]: E0317 18:54:51.710225 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:54:52.711097 kubelet[1933]: E0317 18:54:52.711057 1933 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"