Dec 13 14:09:06.028526 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 14:09:06.028543 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:09:06.028551 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 13 14:09:06.028558 kernel: printk: bootconsole [pl11] enabled
Dec 13 14:09:06.028563 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:09:06.028568 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98
Dec 13 14:09:06.028575 kernel: random: crng init done
Dec 13 14:09:06.028580 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:09:06.028586 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Dec 13 14:09:06.028591 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:09:06.028597 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:09:06.028602 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 13 14:09:06.028609 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:09:06.028614 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:09:06.028628 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:09:06.028634 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:09:06.028641 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:09:06.028648 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:09:06.028654 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 13 14:09:06.028660 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:09:06.028665 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 13 14:09:06.028671 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:09:06.028677 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Dec 13 14:09:06.028683 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Dec 13 14:09:06.028688 kernel: Zone ranges:
Dec 13 14:09:06.028694 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 13 14:09:06.028699 kernel: DMA32 empty
Dec 13 14:09:06.028705 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 14:09:06.028712 kernel: Movable zone start for each node
Dec 13 14:09:06.028717 kernel: Early memory node ranges
Dec 13 14:09:06.028723 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 13 14:09:06.028729 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Dec 13 14:09:06.028734 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Dec 13 14:09:06.028740 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Dec 13 14:09:06.028745 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Dec 13 14:09:06.028751 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Dec 13 14:09:06.028757 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 14:09:06.028763 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 13 14:09:06.028768 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 13 14:09:06.028774 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:09:06.028784 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 14:09:06.028790 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:09:06.028796 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 13 14:09:06.028802 kernel: psci: SMC Calling Convention v1.4
Dec 13 14:09:06.028808 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Dec 13 14:09:06.028815 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Dec 13 14:09:06.028821 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:09:06.028827 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:09:06.028833 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 14:09:06.031859 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:09:06.031877 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:09:06.031884 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 14:09:06.031890 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:09:06.031897 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:09:06.031903 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:09:06.031909 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 14:09:06.031919 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Dec 13 14:09:06.031925 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 14:09:06.031931 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Dec 13 14:09:06.031938 kernel: Policy zone: Normal
Dec 13 14:09:06.031945 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:09:06.031952 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:09:06.031958 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:09:06.031965 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:09:06.031971 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:09:06.031977 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Dec 13 14:09:06.031984 kernel: Memory: 3986940K/4194160K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 207220K reserved, 0K cma-reserved)
Dec 13 14:09:06.031991 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:09:06.031997 kernel: trace event string verifier disabled
Dec 13 14:09:06.032003 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:09:06.032010 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:09:06.032016 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:09:06.032022 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:09:06.032028 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:09:06.032035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:09:06.032041 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:09:06.032047 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:09:06.032053 kernel: GICv3: 960 SPIs implemented
Dec 13 14:09:06.032061 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:09:06.032067 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:09:06.032073 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:09:06.032079 kernel: GICv3: 16 PPIs implemented
Dec 13 14:09:06.032085 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 13 14:09:06.032091 kernel: ITS: No ITS available, not enabling LPIs
Dec 13 14:09:06.032098 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:09:06.032104 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 14:09:06.032110 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 14:09:06.032116 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 14:09:06.032123 kernel: Console: colour dummy device 80x25
Dec 13 14:09:06.032130 kernel: printk: console [tty1] enabled
Dec 13 14:09:06.032137 kernel: ACPI: Core revision 20210730
Dec 13 14:09:06.032143 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 14:09:06.032150 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:09:06.032156 kernel: LSM: Security Framework initializing
Dec 13 14:09:06.032162 kernel: SELinux: Initializing.
Dec 13 14:09:06.032168 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:09:06.032175 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:09:06.032182 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Dec 13 14:09:06.032189 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Dec 13 14:09:06.032195 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:09:06.032201 kernel: Remapping and enabling EFI services.
Dec 13 14:09:06.032208 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:09:06.032214 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:09:06.032220 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 13 14:09:06.032226 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:09:06.032233 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 14:09:06.032239 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:09:06.032245 kernel: SMP: Total of 2 processors activated.
Dec 13 14:09:06.032253 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:09:06.032259 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 13 14:09:06.032266 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 14:09:06.032272 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:09:06.032278 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 14:09:06.032285 kernel: CPU features: detected: LSE atomic instructions
Dec 13 14:09:06.032291 kernel: CPU features: detected: Privileged Access Never
Dec 13 14:09:06.032297 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:09:06.032303 kernel: alternatives: patching kernel code
Dec 13 14:09:06.032311 kernel: devtmpfs: initialized
Dec 13 14:09:06.032321 kernel: KASLR enabled
Dec 13 14:09:06.032328 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:09:06.032336 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:09:06.032342 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:09:06.032349 kernel: SMBIOS 3.1.0 present.
Dec 13 14:09:06.032355 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Dec 13 14:09:06.032362 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:09:06.032369 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:09:06.032377 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:09:06.032383 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:09:06.032390 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:09:06.032396 kernel: audit: type=2000 audit(0.086:1): state=initialized audit_enabled=0 res=1
Dec 13 14:09:06.032403 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:09:06.032410 kernel: cpuidle: using governor menu
Dec 13 14:09:06.032416 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:09:06.032424 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:09:06.032431 kernel: ACPI: bus type PCI registered
Dec 13 14:09:06.032438 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:09:06.032444 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:09:06.032451 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:09:06.032458 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:09:06.032464 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:09:06.032471 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:09:06.032477 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:09:06.032485 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:09:06.032492 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:09:06.032498 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:09:06.032505 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:09:06.032512 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:09:06.032518 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:09:06.032525 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:09:06.032532 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:09:06.032538 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:09:06.032546 kernel: ACPI: Interpreter enabled
Dec 13 14:09:06.032552 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:09:06.032559 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 14:09:06.032566 kernel: printk: console [ttyAMA0] enabled
Dec 13 14:09:06.032573 kernel: printk: bootconsole [pl11] disabled
Dec 13 14:09:06.032579 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 13 14:09:06.032586 kernel: iommu: Default domain type: Translated
Dec 13 14:09:06.032593 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:09:06.032599 kernel: vgaarb: loaded
Dec 13 14:09:06.032606 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:09:06.032614 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:09:06.032620 kernel: PTP clock support registered
Dec 13 14:09:06.032627 kernel: Registered efivars operations
Dec 13 14:09:06.032634 kernel: No ACPI PMU IRQ for CPU0
Dec 13 14:09:06.032640 kernel: No ACPI PMU IRQ for CPU1
Dec 13 14:09:06.032647 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:09:06.032653 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:09:06.032660 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:09:06.032667 kernel: pnp: PnP ACPI init
Dec 13 14:09:06.032674 kernel: pnp: PnP ACPI: found 0 devices
Dec 13 14:09:06.032680 kernel: NET: Registered PF_INET protocol family
Dec 13 14:09:06.032687 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:09:06.032694 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:09:06.032701 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:09:06.032707 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:09:06.032714 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:09:06.032721 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:09:06.032729 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:09:06.032735 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:09:06.032742 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:09:06.032749 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:09:06.032756 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Dec 13 14:09:06.032762 kernel: kvm [1]: HYP mode not available
Dec 13 14:09:06.032769 kernel: Initialise system trusted keyrings
Dec 13 14:09:06.032775 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:09:06.032782 kernel: Key type asymmetric registered
Dec 13 14:09:06.032789 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:09:06.032796 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:09:06.032803 kernel: io scheduler mq-deadline registered
Dec 13 14:09:06.032809 kernel: io scheduler kyber registered
Dec 13 14:09:06.032816 kernel: io scheduler bfq registered
Dec 13 14:09:06.032822 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:09:06.032829 kernel: thunder_xcv, ver 1.0
Dec 13 14:09:06.032835 kernel: thunder_bgx, ver 1.0
Dec 13 14:09:06.032856 kernel: nicpf, ver 1.0
Dec 13 14:09:06.032863 kernel: nicvf, ver 1.0
Dec 13 14:09:06.032982 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:09:06.033042 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:09:05 UTC (1734098945)
Dec 13 14:09:06.033051 kernel: efifb: probing for efifb
Dec 13 14:09:06.033058 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 14:09:06.033065 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 14:09:06.033072 kernel: efifb: scrolling: redraw
Dec 13 14:09:06.033079 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 14:09:06.033087 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 14:09:06.033094 kernel: fb0: EFI VGA frame buffer device
Dec 13 14:09:06.033100 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 13 14:09:06.033107 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:09:06.033114 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:09:06.033120 kernel: Segment Routing with IPv6
Dec 13 14:09:06.033127 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:09:06.033133 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:09:06.033140 kernel: Key type dns_resolver registered
Dec 13 14:09:06.033146 kernel: registered taskstats version 1
Dec 13 14:09:06.033154 kernel: Loading compiled-in X.509 certificates
Dec 13 14:09:06.033161 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:09:06.033167 kernel: Key type .fscrypt registered
Dec 13 14:09:06.033174 kernel: Key type fscrypt-provisioning registered
Dec 13 14:09:06.033181 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:09:06.033187 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:09:06.033194 kernel: ima: No architecture policies found
Dec 13 14:09:06.033200 kernel: clk: Disabling unused clocks
Dec 13 14:09:06.033208 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:09:06.033215 kernel: Run /init as init process
Dec 13 14:09:06.033221 kernel: with arguments:
Dec 13 14:09:06.033228 kernel: /init
Dec 13 14:09:06.033234 kernel: with environment:
Dec 13 14:09:06.033240 kernel: HOME=/
Dec 13 14:09:06.033247 kernel: TERM=linux
Dec 13 14:09:06.033253 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:09:06.033262 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:09:06.033272 systemd[1]: Detected virtualization microsoft.
Dec 13 14:09:06.033280 systemd[1]: Detected architecture arm64.
Dec 13 14:09:06.033286 systemd[1]: Running in initrd.
Dec 13 14:09:06.033293 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:09:06.033300 systemd[1]: Hostname set to .
Dec 13 14:09:06.033307 systemd[1]: Initializing machine ID from random generator.
Dec 13 14:09:06.033314 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:09:06.033322 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:09:06.033330 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:09:06.033336 systemd[1]: Reached target paths.target.
Dec 13 14:09:06.033344 systemd[1]: Reached target slices.target.
Dec 13 14:09:06.033351 systemd[1]: Reached target swap.target.
Dec 13 14:09:06.033358 systemd[1]: Reached target timers.target.
Dec 13 14:09:06.033365 systemd[1]: Listening on iscsid.socket.
Dec 13 14:09:06.033373 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:09:06.033381 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:09:06.033388 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:09:06.033395 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:09:06.033402 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:09:06.033409 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:09:06.033417 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:09:06.033424 systemd[1]: Reached target sockets.target.
Dec 13 14:09:06.033431 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:09:06.033438 systemd[1]: Finished network-cleanup.service.
Dec 13 14:09:06.033446 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:09:06.033453 systemd[1]: Starting systemd-journald.service...
Dec 13 14:09:06.033460 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:09:06.033467 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:09:06.033474 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:09:06.033484 systemd-journald[276]: Journal started
Dec 13 14:09:06.033522 systemd-journald[276]: Runtime Journal (/run/log/journal/3de1a52d88ac43fb9ff7fe9ace9fb46e) is 8.0M, max 78.5M, 70.5M free.
Dec 13 14:09:06.014940 systemd-modules-load[277]: Inserted module 'overlay'
Dec 13 14:09:06.055638 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:09:06.065053 systemd-modules-load[277]: Inserted module 'br_netfilter'
Dec 13 14:09:06.074170 kernel: Bridge firewalling registered
Dec 13 14:09:06.074188 systemd[1]: Started systemd-journald.service.
Dec 13 14:09:06.074011 systemd-resolved[278]: Positive Trust Anchors:
Dec 13 14:09:06.074018 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:09:06.125498 kernel: audit: type=1130 audit(1734098946.094:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.125523 kernel: SCSI subsystem initialized
Dec 13 14:09:06.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.074050 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:09:06.186311 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:09:06.186333 kernel: audit: type=1130 audit(1734098946.156:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.186342 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:09:06.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.076070 systemd-resolved[278]: Defaulting to hostname 'linux'.
Dec 13 14:09:06.095777 systemd[1]: Started systemd-resolved.service.
Dec 13 14:09:06.214950 kernel: audit: type=1130 audit(1734098946.195:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.156816 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:09:06.228333 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:09:06.195452 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:09:06.267924 kernel: audit: type=1130 audit(1734098946.228:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.267947 kernel: audit: type=1130 audit(1734098946.251:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.228352 systemd-modules-load[277]: Inserted module 'dm_multipath'
Dec 13 14:09:06.297958 kernel: audit: type=1130 audit(1734098946.272:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.229270 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:09:06.251828 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:09:06.273019 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:09:06.298424 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:09:06.307966 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:09:06.323245 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:09:06.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.330110 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:09:06.394363 kernel: audit: type=1130 audit(1734098946.344:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.394387 kernel: audit: type=1130 audit(1734098946.372:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.365463 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:09:06.395532 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:09:06.427369 kernel: audit: type=1130 audit(1734098946.405:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.430570 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:09:06.446128 dracut-cmdline[298]: dracut-dracut-053
Dec 13 14:09:06.451090 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:09:06.542865 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:09:06.557860 kernel: iscsi: registered transport (tcp)
Dec 13 14:09:06.579473 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:09:06.579522 kernel: QLogic iSCSI HBA Driver
Dec 13 14:09:06.614276 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:09:06.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:06.620180 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:09:06.673860 kernel: raid6: neonx8 gen() 13769 MB/s
Dec 13 14:09:06.694862 kernel: raid6: neonx8 xor() 10828 MB/s
Dec 13 14:09:06.714850 kernel: raid6: neonx4 gen() 13562 MB/s
Dec 13 14:09:06.736851 kernel: raid6: neonx4 xor() 11303 MB/s
Dec 13 14:09:06.757860 kernel: raid6: neonx2 gen() 12963 MB/s
Dec 13 14:09:06.777851 kernel: raid6: neonx2 xor() 10411 MB/s
Dec 13 14:09:06.798644 kernel: raid6: neonx1 gen() 10539 MB/s
Dec 13 14:09:06.817853 kernel: raid6: neonx1 xor() 8373 MB/s
Dec 13 14:09:06.837852 kernel: raid6: int64x8 gen() 6269 MB/s
Dec 13 14:09:06.858851 kernel: raid6: int64x8 xor() 3543 MB/s
Dec 13 14:09:06.878855 kernel: raid6: int64x4 gen() 7236 MB/s
Dec 13 14:09:06.898854 kernel: raid6: int64x4 xor() 3858 MB/s
Dec 13 14:09:06.919855 kernel: raid6: int64x2 gen() 6153 MB/s
Dec 13 14:09:06.939850 kernel: raid6: int64x2 xor() 3321 MB/s
Dec 13 14:09:06.959850 kernel: raid6: int64x1 gen() 5047 MB/s
Dec 13 14:09:06.985142 kernel: raid6: int64x1 xor() 2647 MB/s
Dec 13 14:09:06.985163 kernel: raid6: using algorithm neonx8 gen() 13769 MB/s
Dec 13 14:09:06.985179 kernel: raid6: .... xor() 10828 MB/s, rmw enabled
Dec 13 14:09:06.989458 kernel: raid6: using neon recovery algorithm
Dec 13 14:09:07.010022 kernel: xor: measuring software checksum speed
Dec 13 14:09:07.010046 kernel: 8regs : 17220 MB/sec
Dec 13 14:09:07.013894 kernel: 32regs : 20676 MB/sec
Dec 13 14:09:07.017568 kernel: arm64_neon : 27993 MB/sec
Dec 13 14:09:07.017577 kernel: xor: using function: arm64_neon (27993 MB/sec)
Dec 13 14:09:07.077857 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 14:09:07.086443 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:09:07.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:07.094000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:09:07.094000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:09:07.095433 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:09:07.113310 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Dec 13 14:09:07.119946 systemd[1]: Started systemd-udevd.service.
Dec 13 14:09:07.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:07.129746 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:09:07.142151 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Dec 13 14:09:07.168055 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:09:07.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:07.173420 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:09:07.211978 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:09:07.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:07.274873 kernel: hv_vmbus: Vmbus version:5.3 Dec 13 14:09:07.298522 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 14:09:07.298565 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 14:09:07.298575 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 13 14:09:07.307818 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 14:09:07.307979 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 14:09:07.316853 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 14:09:07.317860 kernel: scsi host0: storvsc_host_t Dec 13 14:09:07.323370 kernel: scsi host1: storvsc_host_t Dec 13 14:09:07.323417 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 13 14:09:07.348572 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 14:09:07.348640 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 14:09:07.372856 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 14:09:07.413738 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:09:07.413754 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 14:09:07.413876 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 14:09:07.413978 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 14:09:07.414075 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 14:09:07.414156 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 14:09:07.414236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:09:07.414248 kernel: sr 0:0:0:2: 
Attached scsi CD-ROM sr0 Dec 13 14:09:07.414326 kernel: hv_netvsc 0022487a-06ce-0022-487a-06ce0022487a eth0: VF slot 1 added Dec 13 14:09:07.414406 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 14:09:07.414486 kernel: hv_vmbus: registering driver hv_pci Dec 13 14:09:07.427141 kernel: hv_pci 2e5d131a-d5d6-4065-8986-3c44b3c1308c: PCI VMBus probing: Using version 0x10004 Dec 13 14:09:07.526202 kernel: hv_pci 2e5d131a-d5d6-4065-8986-3c44b3c1308c: PCI host bridge to bus d5d6:00 Dec 13 14:09:07.526295 kernel: pci_bus d5d6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 14:09:07.526396 kernel: pci_bus d5d6:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 14:09:07.526471 kernel: pci d5d6:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 14:09:07.526556 kernel: pci d5d6:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 14:09:07.526631 kernel: pci d5d6:00:02.0: enabling Extended Tags Dec 13 14:09:07.526711 kernel: pci d5d6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d5d6:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 14:09:07.526785 kernel: pci_bus d5d6:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 14:09:07.526884 kernel: pci d5d6:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 14:09:07.597866 kernel: mlx5_core d5d6:00:02.0: firmware version: 16.30.1284 Dec 13 14:09:07.812772 kernel: mlx5_core d5d6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Dec 13 14:09:07.812930 kernel: hv_netvsc 0022487a-06ce-0022-487a-06ce0022487a eth0: VF registering: eth1 Dec 13 14:09:07.813020 kernel: mlx5_core d5d6:00:02.0 eth1: joined to eth0 Dec 13 14:09:07.820869 kernel: mlx5_core d5d6:00:02.0 enP54742s1: renamed from eth1 Dec 13 14:09:07.846769 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Dec 13 14:09:07.906869 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (530) Dec 13 14:09:07.920592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:09:08.095799 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:09:08.113709 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:09:08.125101 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:09:08.131871 systemd[1]: Starting disk-uuid.service... Dec 13 14:09:08.154895 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:09:08.162861 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:09:09.171857 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:09:09.171954 disk-uuid[603]: The operation has completed successfully. Dec 13 14:09:09.222085 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:09:09.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:09.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:09.222181 systemd[1]: Finished disk-uuid.service. Dec 13 14:09:09.236386 systemd[1]: Starting verity-setup.service... Dec 13 14:09:09.275892 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 14:09:09.447073 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:09:09.453356 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:09:09.464644 systemd[1]: Finished verity-setup.service. Dec 13 14:09:09.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:09:09.519764 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:09:09.527278 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:09:09.524028 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:09:09.524779 systemd[1]: Starting ignition-setup.service... Dec 13 14:09:09.533405 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:09:09.566204 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:09:09.566248 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:09:09.570982 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:09:09.621603 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:09:09.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:09.629000 audit: BPF prog-id=9 op=LOAD Dec 13 14:09:09.631052 systemd[1]: Starting systemd-networkd.service... Dec 13 14:09:09.643645 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:09:09.661019 systemd-networkd[844]: lo: Link UP Dec 13 14:09:09.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:09.661025 systemd-networkd[844]: lo: Gained carrier Dec 13 14:09:09.661392 systemd-networkd[844]: Enumeration completed Dec 13 14:09:09.661675 systemd[1]: Started systemd-networkd.service. Dec 13 14:09:09.666539 systemd[1]: Reached target network.target. Dec 13 14:09:09.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:09:09.669792 systemd-networkd[844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:09:09.705564 iscsid[853]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:09:09.705564 iscsid[853]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 14:09:09.705564 iscsid[853]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:09:09.705564 iscsid[853]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:09:09.705564 iscsid[853]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:09:09.705564 iscsid[853]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:09:09.705564 iscsid[853]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:09:09.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:09.675567 systemd[1]: Starting iscsiuio.service... Dec 13 14:09:09.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:09.686585 systemd[1]: Started iscsiuio.service. Dec 13 14:09:09.697808 systemd[1]: Starting iscsid.service... Dec 13 14:09:09.708988 systemd[1]: Started iscsid.service. Dec 13 14:09:09.738626 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:09:09.779878 systemd[1]: Finished dracut-initqueue.service. 
Dec 13 14:09:09.792048 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:09:09.800308 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:09:09.805334 systemd[1]: Reached target remote-fs.target. Dec 13 14:09:09.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:09.816692 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:09:09.840211 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:09:09.866086 kernel: mlx5_core d5d6:00:02.0 enP54742s1: Link up Dec 13 14:09:09.874122 systemd[1]: Finished ignition-setup.service. Dec 13 14:09:09.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:09.879585 systemd[1]: Starting ignition-fetch-offline.service... 
Dec 13 14:09:09.908612 kernel: hv_netvsc 0022487a-06ce-0022-487a-06ce0022487a eth0: Data path switched to VF: enP54742s1 Dec 13 14:09:09.909318 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:09:09.908954 systemd-networkd[844]: enP54742s1: Link UP Dec 13 14:09:09.909029 systemd-networkd[844]: eth0: Link UP Dec 13 14:09:09.909146 systemd-networkd[844]: eth0: Gained carrier Dec 13 14:09:09.916722 systemd-networkd[844]: enP54742s1: Gained carrier Dec 13 14:09:09.933908 systemd-networkd[844]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 14:09:11.861957 systemd-networkd[844]: eth0: Gained IPv6LL Dec 13 14:09:12.514953 ignition[869]: Ignition 2.14.0 Dec 13 14:09:12.514964 ignition[869]: Stage: fetch-offline Dec 13 14:09:12.515019 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:09:12.515041 ignition[869]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:09:12.591884 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:09:12.592017 ignition[869]: parsed url from cmdline: "" Dec 13 14:09:12.592020 ignition[869]: no config URL provided Dec 13 14:09:12.592025 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:09:12.609025 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:09:12.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:09:12.592033 ignition[869]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:09:12.650143 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 13 14:09:12.650165 kernel: audit: type=1130 audit(1734098952.619:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:12.629062 systemd[1]: Starting ignition-fetch.service... Dec 13 14:09:12.592039 ignition[869]: failed to fetch config: resource requires networking Dec 13 14:09:12.592313 ignition[869]: Ignition finished successfully Dec 13 14:09:12.635966 ignition[875]: Ignition 2.14.0 Dec 13 14:09:12.635972 ignition[875]: Stage: fetch Dec 13 14:09:12.636070 ignition[875]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:09:12.636091 ignition[875]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:09:12.638966 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:09:12.639103 ignition[875]: parsed url from cmdline: "" Dec 13 14:09:12.639109 ignition[875]: no config URL provided Dec 13 14:09:12.639115 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:09:12.639121 ignition[875]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:09:12.639147 ignition[875]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 14:09:12.753442 ignition[875]: GET result: OK Dec 13 14:09:12.753536 ignition[875]: config has been read from IMDS userdata Dec 13 14:09:12.756550 unknown[875]: fetched base config from "system" Dec 13 14:09:12.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:09:12.753570 ignition[875]: parsing config with SHA512: 42689d0d52221757ad8e9005f0cefdb344ee560bc408a1618f686e93f0366df01906f9756f623d9741f9b00fed2840a2b43b28cfbc0599d2776b5361bd6401f6 Dec 13 14:09:12.756558 unknown[875]: fetched base config from "system" Dec 13 14:09:12.791440 kernel: audit: type=1130 audit(1734098952.765:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:12.756959 ignition[875]: fetch: fetch complete Dec 13 14:09:12.756563 unknown[875]: fetched user config from "azure" Dec 13 14:09:12.756964 ignition[875]: fetch: fetch passed Dec 13 14:09:12.758105 systemd[1]: Finished ignition-fetch.service. Dec 13 14:09:12.757012 ignition[875]: Ignition finished successfully Dec 13 14:09:12.836947 kernel: audit: type=1130 audit(1734098952.814:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:12.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:12.785632 systemd[1]: Starting ignition-kargs.service... Dec 13 14:09:12.799140 ignition[881]: Ignition 2.14.0 Dec 13 14:09:12.806505 systemd[1]: Finished ignition-kargs.service. Dec 13 14:09:12.799147 ignition[881]: Stage: kargs Dec 13 14:09:12.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:12.815802 systemd[1]: Starting ignition-disks.service... 
Dec 13 14:09:12.799257 ignition[881]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:09:12.896426 kernel: audit: type=1130 audit(1734098952.849:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:12.846087 systemd[1]: Finished ignition-disks.service. Dec 13 14:09:12.799276 ignition[881]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:09:12.850511 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:09:12.802075 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:09:12.875341 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:09:12.804449 ignition[881]: kargs: kargs passed Dec 13 14:09:12.882419 systemd[1]: Reached target local-fs.target. Dec 13 14:09:12.804493 ignition[881]: Ignition finished successfully Dec 13 14:09:12.890694 systemd[1]: Reached target sysinit.target. Dec 13 14:09:12.825505 ignition[887]: Ignition 2.14.0 Dec 13 14:09:12.900479 systemd[1]: Reached target basic.target. Dec 13 14:09:12.825511 ignition[887]: Stage: disks Dec 13 14:09:12.909540 systemd[1]: Starting systemd-fsck-root.service... 
Dec 13 14:09:12.825622 ignition[887]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:09:12.825640 ignition[887]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:09:12.828353 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:09:12.843340 ignition[887]: disks: disks passed Dec 13 14:09:12.843405 ignition[887]: Ignition finished successfully Dec 13 14:09:13.011765 systemd-fsck[895]: ROOT: clean, 621/7326000 files, 481076/7359488 blocks Dec 13 14:09:13.026493 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:09:13.053486 kernel: audit: type=1130 audit(1734098953.030:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:13.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:13.032358 systemd[1]: Mounting sysroot.mount... Dec 13 14:09:13.071869 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:09:13.072597 systemd[1]: Mounted sysroot.mount. Dec 13 14:09:13.076738 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:09:13.157592 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:09:13.162294 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 14:09:13.170009 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:09:13.170039 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:09:13.176153 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:09:13.239618 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Dec 13 14:09:13.244656 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:09:13.267870 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (906) Dec 13 14:09:13.274954 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:09:13.287165 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:09:13.287204 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:09:13.287217 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:09:13.295196 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:09:13.308529 initrd-setup-root[937]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:09:13.330383 initrd-setup-root[945]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:09:13.339095 initrd-setup-root[953]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:09:13.846572 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:09:13.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:13.874710 systemd[1]: Starting ignition-mount.service... Dec 13 14:09:13.891012 kernel: audit: type=1130 audit(1734098953.851:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:13.887594 systemd[1]: Starting sysroot-boot.service... Dec 13 14:09:13.896234 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:09:13.896328 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Dec 13 14:09:13.925239 ignition[972]: INFO : Ignition 2.14.0 Dec 13 14:09:13.925239 ignition[972]: INFO : Stage: mount Dec 13 14:09:13.946441 ignition[972]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:09:13.946441 ignition[972]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:09:13.946441 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:09:13.946441 ignition[972]: INFO : mount: mount passed Dec 13 14:09:13.946441 ignition[972]: INFO : Ignition finished successfully Dec 13 14:09:14.064754 kernel: audit: type=1130 audit(1734098953.974:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:14.064780 kernel: audit: type=1130 audit(1734098954.005:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:13.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:14.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:13.946447 systemd[1]: Finished ignition-mount.service. Dec 13 14:09:13.976399 systemd[1]: Finished sysroot-boot.service. 
Dec 13 14:09:14.504234 coreos-metadata[905]: Dec 13 14:09:14.504 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:09:14.514059 coreos-metadata[905]: Dec 13 14:09:14.513 INFO Fetch successful Dec 13 14:09:14.547008 coreos-metadata[905]: Dec 13 14:09:14.546 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:09:14.558930 coreos-metadata[905]: Dec 13 14:09:14.558 INFO Fetch successful Dec 13 14:09:14.635272 coreos-metadata[905]: Dec 13 14:09:14.635 INFO wrote hostname ci-3510.3.6-a-478c04130c to /sysroot/etc/hostname Dec 13 14:09:14.645923 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 14:09:14.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:14.675414 systemd[1]: Starting ignition-files.service... Dec 13 14:09:14.687626 kernel: audit: type=1130 audit(1734098954.651:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:14.689025 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:09:14.709864 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (984) Dec 13 14:09:14.724548 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:09:14.724591 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:09:14.724601 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:09:14.734382 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 14:09:14.752139 ignition[1003]: INFO : Ignition 2.14.0 Dec 13 14:09:14.756486 ignition[1003]: INFO : Stage: files Dec 13 14:09:14.756486 ignition[1003]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:09:14.756486 ignition[1003]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:09:14.780766 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:09:14.780766 ignition[1003]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:09:14.780766 ignition[1003]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:09:14.780766 ignition[1003]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:09:14.874338 ignition[1003]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:09:14.883177 ignition[1003]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:09:14.883177 ignition[1003]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:09:14.875212 unknown[1003]: wrote ssh authorized keys file for user: core Dec 13 14:09:14.906623 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:09:14.906623 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:09:14.906623 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:09:14.906623 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:09:14.906623 ignition[1003]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 14:09:14.906623 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 14:09:14.906623 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:09:14.906623 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:09:15.025048 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1006) Dec 13 14:09:14.953959 systemd[1]: mnt-oem1058185953.mount: Deactivated successfully. Dec 13 14:09:15.031253 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058185953" Dec 13 14:09:15.031253 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058185953": device or resource busy Dec 13 14:09:15.031253 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1058185953", trying btrfs: device or resource busy Dec 13 14:09:15.031253 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058185953" Dec 13 14:09:15.031253 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058185953" Dec 13 14:09:15.031253 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting 
"/mnt/oem1058185953" Dec 13 14:09:15.031253 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem1058185953" Dec 13 14:09:15.031253 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:09:15.031253 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:09:15.031253 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:09:15.031253 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3443227889" Dec 13 14:09:15.031253 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3443227889": device or resource busy Dec 13 14:09:15.031253 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3443227889", trying btrfs: device or resource busy Dec 13 14:09:15.031253 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3443227889" Dec 13 14:09:14.986399 systemd[1]: mnt-oem3443227889.mount: Deactivated successfully. 
Dec 13 14:09:15.203661 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3443227889" Dec 13 14:09:15.203661 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem3443227889" Dec 13 14:09:15.203661 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem3443227889" Dec 13 14:09:15.203661 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:09:15.203661 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 14:09:15.203661 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 14:09:15.511800 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Dec 13 14:09:15.745765 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 14:09:15.759664 ignition[1003]: INFO : files: op(f): [started] processing unit "waagent.service" Dec 13 14:09:15.759664 ignition[1003]: INFO : files: op(f): [finished] processing unit "waagent.service" Dec 13 14:09:15.759664 ignition[1003]: INFO : files: op(10): [started] processing unit "nvidia.service" Dec 13 14:09:15.759664 ignition[1003]: INFO : files: op(10): [finished] processing unit "nvidia.service" Dec 13 14:09:15.759664 ignition[1003]: INFO : files: op(11): [started] setting preset to enabled for "nvidia.service" Dec 13 14:09:15.759664 ignition[1003]: INFO : files: op(11): [finished] setting preset to enabled for "nvidia.service" Dec 13 
14:09:15.759664 ignition[1003]: INFO : files: op(12): [started] setting preset to enabled for "waagent.service" Dec 13 14:09:15.759664 ignition[1003]: INFO : files: op(12): [finished] setting preset to enabled for "waagent.service" Dec 13 14:09:15.759664 ignition[1003]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:09:15.759664 ignition[1003]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:09:15.759664 ignition[1003]: INFO : files: files passed Dec 13 14:09:15.759664 ignition[1003]: INFO : Ignition finished successfully Dec 13 14:09:15.921404 kernel: audit: type=1130 audit(1734098955.764:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:15.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:15.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:15.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:15.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:15.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:09:15.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:15.759625 systemd[1]: Finished ignition-files.service. Dec 13 14:09:15.767530 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:09:15.793126 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:09:15.954885 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:09:15.795010 systemd[1]: Starting ignition-quench.service... Dec 13 14:09:15.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:15.814650 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:09:15.828409 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:09:15.828492 systemd[1]: Finished ignition-quench.service. Dec 13 14:09:15.840100 systemd[1]: Reached target ignition-complete.target. Dec 13 14:09:15.853707 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:09:15.876065 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:09:16.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:15.876166 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:09:15.890537 systemd[1]: Reached target initrd-fs.target. Dec 13 14:09:15.906347 systemd[1]: Reached target initrd.target. Dec 13 14:09:15.916034 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Dec 13 14:09:15.916944 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:09:15.964329 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:09:15.970676 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:09:15.992896 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:09:15.998254 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:09:16.007967 systemd[1]: Stopped target timers.target. Dec 13 14:09:16.018495 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:09:16.018553 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:09:16.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.028211 systemd[1]: Stopped target initrd.target. Dec 13 14:09:16.038224 systemd[1]: Stopped target basic.target. Dec 13 14:09:16.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.047256 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:09:16.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.056761 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:09:16.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.066561 systemd[1]: Stopped target initrd-root-device.target. 
Dec 13 14:09:16.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.079806 systemd[1]: Stopped target remote-fs.target. Dec 13 14:09:16.089208 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:09:16.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.099060 systemd[1]: Stopped target sysinit.target. Dec 13 14:09:16.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.243427 ignition[1042]: INFO : Ignition 2.14.0 Dec 13 14:09:16.243427 ignition[1042]: INFO : Stage: umount Dec 13 14:09:16.243427 ignition[1042]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:09:16.243427 ignition[1042]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:09:16.243427 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:09:16.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:09:16.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.108210 systemd[1]: Stopped target local-fs.target. Dec 13 14:09:16.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.309713 ignition[1042]: INFO : umount: umount passed Dec 13 14:09:16.309713 ignition[1042]: INFO : Ignition finished successfully Dec 13 14:09:16.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.117465 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:09:16.127095 systemd[1]: Stopped target swap.target. Dec 13 14:09:16.135471 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:09:16.135534 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:09:16.146044 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:09:16.155246 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:09:16.155295 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:09:16.163897 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 13 14:09:16.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.163934 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:09:16.174086 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:09:16.174119 systemd[1]: Stopped ignition-files.service. Dec 13 14:09:16.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.183573 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 14:09:16.183611 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 14:09:16.194674 systemd[1]: Stopping ignition-mount.service... Dec 13 14:09:16.203907 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:09:16.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.208231 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:09:16.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.208309 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:09:16.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.226454 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:09:16.226512 systemd[1]: Stopped dracut-pre-trigger.service. 
Dec 13 14:09:16.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.238221 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:09:16.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.238324 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:09:16.513000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:09:16.253693 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:09:16.253793 systemd[1]: Stopped ignition-mount.service. Dec 13 14:09:16.262411 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:09:16.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.262461 systemd[1]: Stopped ignition-disks.service. Dec 13 14:09:16.573472 kernel: hv_netvsc 0022487a-06ce-0022-487a-06ce0022487a eth0: Data path switched from VF: enP54742s1 Dec 13 14:09:16.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.274818 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:09:16.274865 systemd[1]: Stopped ignition-kargs.service. 
Dec 13 14:09:16.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.294243 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:09:16.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.294281 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:09:16.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.305965 systemd[1]: Stopped target network.target. Dec 13 14:09:16.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.313839 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:09:16.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.313915 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:09:16.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.322464 systemd[1]: Stopped target paths.target. 
Dec 13 14:09:16.331701 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:09:16.339862 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:09:16.350940 systemd[1]: Stopped target slices.target. Dec 13 14:09:16.355129 systemd[1]: Stopped target sockets.target. Dec 13 14:09:16.363879 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:09:16.363920 systemd[1]: Closed iscsid.socket. Dec 13 14:09:16.372656 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:09:16.372683 systemd[1]: Closed iscsiuio.socket. Dec 13 14:09:16.382893 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:09:16.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:16.382944 systemd[1]: Stopped ignition-setup.service. Dec 13 14:09:16.392360 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:09:16.404482 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:09:16.409769 systemd-networkd[844]: eth0: DHCPv6 lease lost Dec 13 14:09:16.716000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:09:16.416475 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:09:16.416967 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:09:16.417058 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:09:16.425889 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:09:16.425922 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:09:16.441368 systemd[1]: Stopping network-cleanup.service... Dec 13 14:09:16.451854 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:09:16.764436 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Dec 13 14:09:16.764469 iscsid[853]: iscsid shutting down. Dec 13 14:09:16.451927 systemd[1]: Stopped parse-ip-for-networkd.service. 
Dec 13 14:09:16.461348 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:09:16.461396 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:09:16.475507 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:09:16.475546 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:09:16.480492 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:09:16.489483 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:09:16.489982 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:09:16.490070 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:09:16.500233 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:09:16.500357 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:09:16.509755 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:09:16.509800 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:09:16.523928 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:09:16.523964 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:09:16.533329 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:09:16.533376 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:09:16.544058 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:09:16.544099 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:09:16.548537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:09:16.548574 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:09:16.572499 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:09:16.583701 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:09:16.583759 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:09:16.592409 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Dec 13 14:09:16.592449 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:09:16.597069 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:09:16.597104 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:09:16.608016 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:09:16.608497 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:09:16.608584 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:09:16.615126 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:09:16.615194 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:09:16.623808 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:09:16.623855 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:09:16.673051 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:09:16.673138 systemd[1]: Stopped network-cleanup.service. Dec 13 14:09:16.680940 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:09:16.691906 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:09:16.712614 systemd[1]: Switching root. Dec 13 14:09:16.765557 systemd-journald[276]: Journal stopped Dec 13 14:09:28.404277 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:09:28.404297 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:09:28.404308 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:09:28.404317 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:09:28.404325 kernel: SELinux: policy capability open_perms=1 Dec 13 14:09:28.404333 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:09:28.404342 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:09:28.404350 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:09:28.404358 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:09:28.404365 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:09:28.404373 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:09:28.404385 kernel: kauditd_printk_skb: 41 callbacks suppressed Dec 13 14:09:28.404394 kernel: audit: type=1403 audit(1734098958.976:80): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:09:28.404404 systemd[1]: Successfully loaded SELinux policy in 274.086ms. Dec 13 14:09:28.404414 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.148ms. Dec 13 14:09:28.404426 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:09:28.404435 systemd[1]: Detected virtualization microsoft. Dec 13 14:09:28.404444 systemd[1]: Detected architecture arm64. Dec 13 14:09:28.404453 systemd[1]: Detected first boot. Dec 13 14:09:28.404462 systemd[1]: Hostname set to . Dec 13 14:09:28.404471 systemd[1]: Initializing machine ID from random generator. 
Dec 13 14:09:28.404481 kernel: audit: type=1400 audit(1734098959.722:81): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:09:28.404491 kernel: audit: type=1400 audit(1734098959.722:82): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:09:28.404500 kernel: audit: type=1334 audit(1734098959.742:83): prog-id=10 op=LOAD Dec 13 14:09:28.404508 kernel: audit: type=1334 audit(1734098959.742:84): prog-id=10 op=UNLOAD Dec 13 14:09:28.404517 kernel: audit: type=1334 audit(1734098959.761:85): prog-id=11 op=LOAD Dec 13 14:09:28.404525 kernel: audit: type=1334 audit(1734098959.761:86): prog-id=11 op=UNLOAD Dec 13 14:09:28.404534 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:09:28.404543 kernel: audit: type=1400 audit(1734098960.858:87): avc: denied { associate } for pid=1076 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:09:28.404554 kernel: audit: type=1300 audit(1734098960.858:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1059 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:28.404564 kernel: audit: type=1327 audit(1734098960.858:87): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:09:28.404573 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:09:28.404585 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:09:28.404595 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:09:28.404606 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:09:28.404616 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 13 14:09:28.404624 kernel: audit: type=1334 audit(1734098967.629:89): prog-id=12 op=LOAD Dec 13 14:09:28.404633 kernel: audit: type=1334 audit(1734098967.629:90): prog-id=3 op=UNLOAD Dec 13 14:09:28.404641 kernel: audit: type=1334 audit(1734098967.634:91): prog-id=13 op=LOAD Dec 13 14:09:28.404650 kernel: audit: type=1334 audit(1734098967.641:92): prog-id=14 op=LOAD Dec 13 14:09:28.404661 kernel: audit: type=1334 audit(1734098967.641:93): prog-id=4 op=UNLOAD Dec 13 14:09:28.404670 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:09:28.404679 kernel: audit: type=1334 audit(1734098967.641:94): prog-id=5 op=UNLOAD Dec 13 14:09:28.404688 systemd[1]: Stopped iscsiuio.service. 
Dec 13 14:09:28.404698 kernel: audit: type=1334 audit(1734098967.646:95): prog-id=15 op=LOAD Dec 13 14:09:28.404707 kernel: audit: type=1334 audit(1734098967.646:96): prog-id=12 op=UNLOAD Dec 13 14:09:28.404716 kernel: audit: type=1334 audit(1734098967.652:97): prog-id=16 op=LOAD Dec 13 14:09:28.404725 kernel: audit: type=1334 audit(1734098967.658:98): prog-id=17 op=LOAD Dec 13 14:09:28.404734 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:09:28.404743 systemd[1]: Stopped iscsid.service. Dec 13 14:09:28.404752 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:09:28.404762 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:09:28.404772 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:09:28.404782 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:09:28.404792 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:09:28.404801 systemd[1]: Created slice system-getty.slice. Dec 13 14:09:28.404810 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:09:28.404820 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:09:28.404829 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:09:28.404839 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:09:28.404866 systemd[1]: Created slice user.slice. Dec 13 14:09:28.404875 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:09:28.404884 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:09:28.404894 systemd[1]: Set up automount boot.automount. Dec 13 14:09:28.404903 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:09:28.404912 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:09:28.404922 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:09:28.404931 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:09:28.404941 systemd[1]: Reached target integritysetup.target. 
Dec 13 14:09:28.404951 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:09:28.404960 systemd[1]: Reached target remote-fs.target. Dec 13 14:09:28.404969 systemd[1]: Reached target slices.target. Dec 13 14:09:28.404978 systemd[1]: Reached target swap.target. Dec 13 14:09:28.404988 systemd[1]: Reached target torcx.target. Dec 13 14:09:28.405001 systemd[1]: Reached target veritysetup.target. Dec 13 14:09:28.405010 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:09:28.405019 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:09:28.405029 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:09:28.405038 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:09:28.405047 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:09:28.405057 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:09:28.405066 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:09:28.405077 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:09:28.405086 systemd[1]: Mounting media.mount... Dec 13 14:09:28.405096 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:09:28.405105 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:09:28.405114 systemd[1]: Mounting tmp.mount... Dec 13 14:09:28.405124 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:09:28.405134 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:09:28.405143 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:09:28.405152 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:09:28.405163 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:09:28.405172 systemd[1]: Starting modprobe@drm.service... Dec 13 14:09:28.405182 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:09:28.405191 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:09:28.405202 systemd[1]: Starting modprobe@loop.service... 
Dec 13 14:09:28.405212 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:09:28.405221 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:09:28.405231 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:09:28.405241 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:09:28.405251 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:09:28.405260 kernel: fuse: init (API version 7.34) Dec 13 14:09:28.405269 systemd[1]: Stopped systemd-journald.service. Dec 13 14:09:28.405278 kernel: loop: module loaded Dec 13 14:09:28.405287 systemd[1]: systemd-journald.service: Consumed 2.876s CPU time. Dec 13 14:09:28.405296 systemd[1]: Starting systemd-journald.service... Dec 13 14:09:28.405306 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:09:28.405315 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:09:28.405326 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:09:28.405335 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:09:28.405345 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:09:28.405354 systemd[1]: Stopped verity-setup.service. Dec 13 14:09:28.405364 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:09:28.405373 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:09:28.405382 systemd[1]: Mounted media.mount. Dec 13 14:09:28.405392 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:09:28.405402 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:09:28.405416 systemd-journald[1183]: Journal started Dec 13 14:09:28.405455 systemd-journald[1183]: Runtime Journal (/run/log/journal/3ab950f43849431db425956c002677f2) is 8.0M, max 78.5M, 70.5M free. 
Dec 13 14:09:18.976000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:09:19.722000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:09:19.722000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:09:19.742000 audit: BPF prog-id=10 op=LOAD Dec 13 14:09:19.742000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:09:19.761000 audit: BPF prog-id=11 op=LOAD Dec 13 14:09:19.761000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:09:20.858000 audit[1076]: AVC avc: denied { associate } for pid=1076 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:09:20.858000 audit[1076]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1059 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:20.858000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:09:20.867000 audit[1076]: AVC avc: denied { associate } for pid=1076 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:09:20.867000 audit[1076]: SYSCALL 
arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145975 a2=1ed a3=0 items=2 ppid=1059 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:20.867000 audit: CWD cwd="/" Dec 13 14:09:20.867000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:20.867000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:20.867000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:09:27.629000 audit: BPF prog-id=12 op=LOAD Dec 13 14:09:27.629000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:09:27.634000 audit: BPF prog-id=13 op=LOAD Dec 13 14:09:27.641000 audit: BPF prog-id=14 op=LOAD Dec 13 14:09:27.641000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:09:27.641000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:09:27.646000 audit: BPF prog-id=15 op=LOAD Dec 13 14:09:27.646000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:09:27.652000 audit: BPF prog-id=16 op=LOAD Dec 13 14:09:27.658000 audit: BPF prog-id=17 op=LOAD Dec 13 14:09:27.658000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:09:27.658000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:09:27.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:09:27.702000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:09:27.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:27.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:27.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:27.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:09:28.291000 audit: BPF prog-id=18 op=LOAD Dec 13 14:09:28.291000 audit: BPF prog-id=19 op=LOAD Dec 13 14:09:28.291000 audit: BPF prog-id=20 op=LOAD Dec 13 14:09:28.291000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:09:28.291000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:09:28.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.398000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:09:28.398000 audit[1183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe5afdc20 a2=4000 a3=1 items=0 ppid=1 pid=1183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:28.398000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:09:27.629089 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:09:20.842806 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:09:27.659995 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:09:20.843142 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:09:27.660368 systemd[1]: systemd-journald.service: Consumed 2.876s CPU time. 
Dec 13 14:09:20.843160 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:09:20.843198 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:09:20.843208 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:09:20.843239 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:09:20.843250 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:09:20.843447 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:09:20.843480 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:09:20.843491 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:09:20.843869 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:09:20.843904 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=debug msg="new archive/reference added to cache" 
format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:09:20.843923 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:09:20.843937 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:09:20.843955 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:09:20.843968 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:09:26.381880 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:26Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:09:26.382137 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:26Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:09:26.382242 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:26Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl Dec 13 14:09:26.382417 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:26Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:09:26.382468 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:26Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:09:26.382521 /usr/lib/systemd/system-generators/torcx-generator[1076]: time="2024-12-13T14:09:26Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:09:28.417860 systemd[1]: Started systemd-journald.service. Dec 13 14:09:28.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.429079 systemd[1]: Mounted tmp.mount. Dec 13 14:09:28.434474 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:09:28.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.439785 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:09:28.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:09:28.445096 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:09:28.445231 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:09:28.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.450529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:09:28.450853 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:09:28.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.456044 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:09:28.456162 systemd[1]: Finished modprobe@drm.service. Dec 13 14:09:28.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.461446 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 14:09:28.461592 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:09:28.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.467041 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:09:28.467157 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:09:28.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.472368 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:09:28.472504 systemd[1]: Finished modprobe@loop.service. Dec 13 14:09:28.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.477411 systemd[1]: Finished systemd-modules-load.service. 
Dec 13 14:09:28.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.482920 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:09:28.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.488486 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:09:28.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.494353 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:09:28.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.500215 systemd[1]: Reached target network-pre.target. Dec 13 14:09:28.506265 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:09:28.511835 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:09:28.515960 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:09:28.517466 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:09:28.523257 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:09:28.527874 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:09:28.528871 systemd[1]: Starting systemd-random-seed.service... 
Dec 13 14:09:28.533311 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:09:28.534265 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:09:28.539494 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:09:28.544727 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:09:28.551207 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:09:28.557085 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:09:28.564385 udevadm[1197]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:09:28.569142 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:09:28.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.574616 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:09:28.589879 systemd-journald[1183]: Time spent on flushing to /var/log/journal/3ab950f43849431db425956c002677f2 is 13.460ms for 1079 entries. Dec 13 14:09:28.589879 systemd-journald[1183]: System Journal (/var/log/journal/3ab950f43849431db425956c002677f2) is 8.0M, max 2.6G, 2.6G free. Dec 13 14:09:28.658470 systemd-journald[1183]: Received client request to flush runtime journal. Dec 13 14:09:28.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:28.632247 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:09:28.659377 systemd[1]: Finished systemd-journal-flush.service. 
Dec 13 14:09:28.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:29.182543 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:09:29.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:29.191096 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:09:29.440145 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:09:29.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:29.571237 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:09:29.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:29.576000 audit: BPF prog-id=21 op=LOAD Dec 13 14:09:29.576000 audit: BPF prog-id=22 op=LOAD Dec 13 14:09:29.576000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:09:29.576000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:09:29.578162 systemd[1]: Starting systemd-udevd.service... Dec 13 14:09:29.596026 systemd-udevd[1202]: Using default interface naming scheme 'v252'. Dec 13 14:09:29.936733 systemd[1]: Started systemd-udevd.service. Dec 13 14:09:29.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:09:29.949000 audit: BPF prog-id=23 op=LOAD Dec 13 14:09:29.950627 systemd[1]: Starting systemd-networkd.service... Dec 13 14:09:29.975657 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Dec 13 14:09:30.016962 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:09:30.015000 audit: BPF prog-id=24 op=LOAD Dec 13 14:09:30.015000 audit: BPF prog-id=25 op=LOAD Dec 13 14:09:30.015000 audit: BPF prog-id=26 op=LOAD Dec 13 14:09:30.028000 audit[1204]: AVC avc: denied { confidentiality } for pid=1204 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:09:30.058928 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:09:30.080654 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 14:09:30.080742 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 14:09:30.080758 kernel: hv_vmbus: registering driver hv_balloon Dec 13 14:09:30.080772 kernel: hv_vmbus: registering driver hv_utils Dec 13 14:09:30.106469 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 14:09:30.106554 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 14:09:30.106569 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 14:09:30.106582 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 14:09:30.106596 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 13 14:09:30.111054 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 14:09:30.111494 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 14:09:30.209467 kernel: Console: switching to colour dummy device 80x25 Dec 13 14:09:30.213274 systemd[1]: Started systemd-userdbd.service. 
Dec 13 14:09:30.228690 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:09:30.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:30.028000 audit[1204]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaef9179f0 a1=aa2c a2=ffffb62624b0 a3=aaaaef879010 items=12 ppid=1202 pid=1204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:30.028000 audit: CWD cwd="/" Dec 13 14:09:30.028000 audit: PATH item=0 name=(null) inode=5672 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PATH item=1 name=(null) inode=10942 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PATH item=2 name=(null) inode=10942 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PATH item=3 name=(null) inode=10943 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PATH item=4 name=(null) inode=10942 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PATH item=5 name=(null) inode=10944 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PATH item=6 name=(null) inode=10942 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PATH item=7 name=(null) inode=10945 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PATH item=8 name=(null) inode=10942 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PATH item=9 name=(null) inode=10946 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PATH item=10 name=(null) inode=10942 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PATH item=11 name=(null) inode=10947 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:09:30.028000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:09:30.424502 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1208) Dec 13 14:09:30.440308 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:09:30.449793 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:09:30.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:09:30.456245 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:09:30.493427 systemd-networkd[1222]: lo: Link UP Dec 13 14:09:30.493463 systemd-networkd[1222]: lo: Gained carrier Dec 13 14:09:30.493867 systemd-networkd[1222]: Enumeration completed Dec 13 14:09:30.493968 systemd[1]: Started systemd-networkd.service. Dec 13 14:09:30.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:30.500403 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:09:30.527028 systemd-networkd[1222]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:09:30.577470 kernel: mlx5_core d5d6:00:02.0 enP54742s1: Link up Dec 13 14:09:30.603335 systemd-networkd[1222]: enP54742s1: Link UP Dec 13 14:09:30.603552 kernel: hv_netvsc 0022487a-06ce-0022-487a-06ce0022487a eth0: Data path switched to VF: enP54742s1 Dec 13 14:09:30.603467 systemd-networkd[1222]: eth0: Link UP Dec 13 14:09:30.603471 systemd-networkd[1222]: eth0: Gained carrier Dec 13 14:09:30.612709 systemd-networkd[1222]: enP54742s1: Gained carrier Dec 13 14:09:30.623567 systemd-networkd[1222]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 14:09:30.791150 lvm[1279]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:09:30.814379 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:09:30.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:30.819726 systemd[1]: Reached target cryptsetup.target. Dec 13 14:09:30.825381 systemd[1]: Starting lvm2-activation.service... 
Dec 13 14:09:30.829276 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:09:30.854362 systemd[1]: Finished lvm2-activation.service. Dec 13 14:09:30.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:30.859523 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:09:30.864289 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:09:30.864318 systemd[1]: Reached target local-fs.target. Dec 13 14:09:30.869180 systemd[1]: Reached target machines.target. Dec 13 14:09:30.874903 systemd[1]: Starting ldconfig.service... Dec 13 14:09:30.879027 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:09:30.879095 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:09:30.880288 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:09:30.886617 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:09:30.893815 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:09:30.899865 systemd[1]: Starting systemd-sysext.service... Dec 13 14:09:30.925958 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1283 (bootctl) Dec 13 14:09:30.927137 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:09:31.194216 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:09:31.244285 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Dec 13 14:09:31.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:31.258991 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:09:31.259646 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:09:31.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:09:31.265338 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:09:31.265530 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:09:31.311466 kernel: loop0: detected capacity change from 0 to 194096 Dec 13 14:09:31.338466 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:09:31.361476 kernel: loop1: detected capacity change from 0 to 194096 Dec 13 14:09:31.365640 (sd-sysext)[1295]: Using extensions 'kubernetes'. Dec 13 14:09:31.365959 (sd-sysext)[1295]: Merged extensions into '/usr'. Dec 13 14:09:31.385139 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:09:31.389186 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:09:31.390389 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:09:31.395845 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:09:31.401548 systemd[1]: Starting modprobe@loop.service... Dec 13 14:09:31.405378 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:09:31.405524 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 14:09:31.407899 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:09:31.412571 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:09:31.412696 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:09:31.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.417846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:09:31.417970 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:09:31.423323 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:09:31.423423 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:09:31.428109 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:09:31.428199 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:09:31.430290 systemd-fsck[1291]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:09:31.430290 systemd-fsck[1291]: /dev/sda1: 236 files, 117175/258078 clusters
Dec 13 14:09:31.430828 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:09:31.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.436018 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:09:31.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.443401 systemd[1]: Mounting boot.mount...
Dec 13 14:09:31.451170 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:09:31.456654 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:09:31.463923 systemd[1]: Mounted boot.mount.
Dec 13 14:09:31.470954 systemd[1]: Reloading.
Dec 13 14:09:31.478295 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:09:31.495377 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:09:31.529565 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:09:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:09:31.533614 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:09:31Z" level=info msg="torcx already run"
Dec 13 14:09:31.602818 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:09:31.602838 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:09:31.617816 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:09:31.652471 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:09:31.680000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:09:31.680000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:09:31.680000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:09:31.680000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:09:31.680000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:09:31.680000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:09:31.681000 audit: BPF prog-id=30 op=LOAD
Dec 13 14:09:31.681000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 14:09:31.681000 audit: BPF prog-id=31 op=LOAD
Dec 13 14:09:31.681000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 14:09:31.682000 audit: BPF prog-id=32 op=LOAD
Dec 13 14:09:31.682000 audit: BPF prog-id=33 op=LOAD
Dec 13 14:09:31.682000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 14:09:31.682000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 14:09:31.682000 audit: BPF prog-id=34 op=LOAD
Dec 13 14:09:31.682000 audit: BPF prog-id=35 op=LOAD
Dec 13 14:09:31.682000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 14:09:31.682000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 14:09:31.691680 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:09:31.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.704177 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:09:31.705331 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:09:31.710388 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:09:31.715413 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:09:31.719398 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:09:31.719534 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:09:31.720301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:09:31.720436 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:09:31.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.725247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:09:31.725369 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:09:31.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.730339 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:09:31.730497 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:09:31.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.736905 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:09:31.738190 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:09:31.743399 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:09:31.749099 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:09:31.753067 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:09:31.753202 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:09:31.754033 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:09:31.754179 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:09:31.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.759034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:09:31.759153 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:09:31.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.764180 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:09:31.764299 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:09:31.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.771221 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:09:31.772344 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:09:31.777159 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:09:31.782245 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:09:31.787494 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:09:31.791169 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:09:31.791283 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:09:31.792362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:09:31.792504 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:09:31.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.797979 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:09:31.798103 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:09:31.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.802806 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:09:31.802923 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:09:31.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.807845 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:09:31.807961 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:09:31.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:31.815250 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:09:31.815319 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:09:31.816288 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:09:31.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:32.143467 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:09:32.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:32.149657 systemd[1]: Starting audit-rules.service...
Dec 13 14:09:32.154379 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:09:32.159760 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:09:32.163000 audit: BPF prog-id=36 op=LOAD
Dec 13 14:09:32.166272 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:09:32.170000 audit: BPF prog-id=37 op=LOAD
Dec 13 14:09:32.173021 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:09:32.178068 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:09:32.208000 audit[1402]: SYSTEM_BOOT pid=1402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:32.215717 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:09:32.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:32.225771 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:09:32.227415 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:09:32.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:32.245580 systemd-networkd[1222]: eth0: Gained IPv6LL
Dec 13 14:09:32.252721 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:09:32.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:32.266351 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:09:32.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:32.271047 systemd[1]: Reached target time-set.target.
Dec 13 14:09:32.302151 systemd-resolved[1399]: Positive Trust Anchors:
Dec 13 14:09:32.302477 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:09:32.302569 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:09:32.305925 systemd-resolved[1399]: Using system hostname 'ci-3510.3.6-a-478c04130c'.
Dec 13 14:09:32.307454 systemd[1]: Started systemd-resolved.service.
Dec 13 14:09:32.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:32.311972 systemd[1]: Reached target network.target.
Dec 13 14:09:32.316916 systemd[1]: Reached target network-online.target.
Dec 13 14:09:32.323153 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:09:32.413000 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:09:32.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:09:32.548729 systemd-timesyncd[1401]: Contacted time server 216.31.16.12:123 (0.flatcar.pool.ntp.org).
Dec 13 14:09:32.548804 systemd-timesyncd[1401]: Initial clock synchronization to Fri 2024-12-13 14:09:32.547280 UTC.
Dec 13 14:09:32.548000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:09:32.548000 audit[1417]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe094e8d0 a2=420 a3=0 items=0 ppid=1396 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:09:32.548000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:09:32.567495 augenrules[1417]: No rules
Dec 13 14:09:32.568533 systemd[1]: Finished audit-rules.service.
Dec 13 14:09:38.202729 ldconfig[1282]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:09:38.216868 systemd[1]: Finished ldconfig.service.
Dec 13 14:09:38.224236 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:09:38.259872 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:09:38.265272 systemd[1]: Reached target sysinit.target.
Dec 13 14:09:38.270280 systemd[1]: Started motdgen.path.
Dec 13 14:09:38.275091 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:09:38.282879 systemd[1]: Started logrotate.timer.
Dec 13 14:09:38.287841 systemd[1]: Started mdadm.timer.
Dec 13 14:09:38.291983 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:09:38.297492 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:09:38.297523 systemd[1]: Reached target paths.target.
Dec 13 14:09:38.302314 systemd[1]: Reached target timers.target.
Dec 13 14:09:38.307514 systemd[1]: Listening on dbus.socket.
Dec 13 14:09:38.312957 systemd[1]: Starting docker.socket...
Dec 13 14:09:38.319575 systemd[1]: Listening on sshd.socket.
Dec 13 14:09:38.324145 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:09:38.324603 systemd[1]: Listening on docker.socket.
Dec 13 14:09:38.329279 systemd[1]: Reached target sockets.target.
Dec 13 14:09:38.334105 systemd[1]: Reached target basic.target.
Dec 13 14:09:38.338551 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:09:38.338579 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:09:38.339667 systemd[1]: Starting containerd.service...
Dec 13 14:09:38.344651 systemd[1]: Starting dbus.service...
Dec 13 14:09:38.349209 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:09:38.355107 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:09:38.359855 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:09:38.360945 systemd[1]: Starting kubelet.service...
Dec 13 14:09:38.365901 systemd[1]: Starting motdgen.service...
Dec 13 14:09:38.370934 systemd[1]: Started nvidia.service.
Dec 13 14:09:38.376645 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:09:38.383592 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:09:38.389750 systemd[1]: Starting systemd-logind.service...
Dec 13 14:09:38.395264 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:09:38.395324 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:09:38.395723 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:09:38.396996 systemd[1]: Starting update-engine.service...
Dec 13 14:09:38.404797 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:09:38.406593 jq[1427]: false
Dec 13 14:09:38.412589 jq[1444]: true
Dec 13 14:09:38.415688 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:09:38.415862 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:09:38.423353 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:09:38.423585 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:09:38.443278 extend-filesystems[1428]: Found loop1
Dec 13 14:09:38.443278 extend-filesystems[1428]: Found sda
Dec 13 14:09:38.443278 extend-filesystems[1428]: Found sda1
Dec 13 14:09:38.443278 extend-filesystems[1428]: Found sda2
Dec 13 14:09:38.443278 extend-filesystems[1428]: Found sda3
Dec 13 14:09:38.443278 extend-filesystems[1428]: Found usr
Dec 13 14:09:38.443278 extend-filesystems[1428]: Found sda4
Dec 13 14:09:38.443278 extend-filesystems[1428]: Found sda6
Dec 13 14:09:38.443278 extend-filesystems[1428]: Found sda7
Dec 13 14:09:38.443278 extend-filesystems[1428]: Found sda9
Dec 13 14:09:38.443278 extend-filesystems[1428]: Checking size of /dev/sda9
Dec 13 14:09:38.463483 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:09:38.512974 jq[1447]: true
Dec 13 14:09:38.463654 systemd[1]: Finished motdgen.service.
Dec 13 14:09:38.530240 env[1449]: time="2024-12-13T14:09:38.530125541Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:09:38.543338 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 13 14:09:38.548538 systemd-logind[1437]: New seat seat0.
Dec 13 14:09:38.592427 extend-filesystems[1428]: Old size kept for /dev/sda9
Dec 13 14:09:38.592427 extend-filesystems[1428]: Found sr0
Dec 13 14:09:38.626794 env[1449]: time="2024-12-13T14:09:38.601898339Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:09:38.626794 env[1449]: time="2024-12-13T14:09:38.602045411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:09:38.626794 env[1449]: time="2024-12-13T14:09:38.612624534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:09:38.626794 env[1449]: time="2024-12-13T14:09:38.612668051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:09:38.626794 env[1449]: time="2024-12-13T14:09:38.612915916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:09:38.626794 env[1449]: time="2024-12-13T14:09:38.612932435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:09:38.626794 env[1449]: time="2024-12-13T14:09:38.612945954Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:09:38.626794 env[1449]: time="2024-12-13T14:09:38.612955914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:09:38.626794 env[1449]: time="2024-12-13T14:09:38.623362767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:09:38.626794 env[1449]: time="2024-12-13T14:09:38.624683088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:09:38.596670 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:09:38.627342 env[1449]: time="2024-12-13T14:09:38.624877556Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:09:38.627342 env[1449]: time="2024-12-13T14:09:38.624896835Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:09:38.627342 env[1449]: time="2024-12-13T14:09:38.624967391Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:09:38.627342 env[1449]: time="2024-12-13T14:09:38.624980590Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:09:38.596841 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:09:38.630869 bash[1469]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:09:38.631634 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:09:38.640867 dbus-daemon[1426]: [system] SELinux support is enabled
Dec 13 14:09:38.641014 systemd[1]: Started dbus.service.
Dec 13 14:09:38.649205 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:09:38.649239 systemd[1]: Reached target system-config.target.
Dec 13 14:09:38.655613 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.657965084Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658009961Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658024160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658063318Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658079397Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658095156Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658108115Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658466014Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658489092Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658503091Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658517011Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658529330Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658657242Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:09:38.659463 env[1449]: time="2024-12-13T14:09:38.658726958Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:09:38.658336 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.658936825Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.658958504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.658973703Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659014861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659027420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659038699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659049178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659061258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659073937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659084736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659096096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659109495Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659219128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659233287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.659857 env[1449]: time="2024-12-13T14:09:38.659244647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.658353 systemd[1]: Reached target user-config.target.
Dec 13 14:09:38.660307 env[1449]: time="2024-12-13T14:09:38.659257046Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:09:38.660307 env[1449]: time="2024-12-13T14:09:38.659272485Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:09:38.660307 env[1449]: time="2024-12-13T14:09:38.659284244Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:09:38.660307 env[1449]: time="2024-12-13T14:09:38.659301883Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:09:38.660307 env[1449]: time="2024-12-13T14:09:38.659335841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:09:38.661465 env[1449]: time="2024-12-13T14:09:38.660594605Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:09:38.661465 env[1449]: time="2024-12-13T14:09:38.660665641Z" level=info msg="Connect containerd service" Dec 13 14:09:38.661465 env[1449]: time="2024-12-13T14:09:38.660704519Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:09:38.661465 env[1449]: time="2024-12-13T14:09:38.661267845Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:09:38.681508 env[1449]: time="2024-12-13T14:09:38.661724097Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:09:38.681508 env[1449]: time="2024-12-13T14:09:38.661768615Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 14:09:38.681508 env[1449]: time="2024-12-13T14:09:38.661808252Z" level=info msg="containerd successfully booted in 0.132401s" Dec 13 14:09:38.681508 env[1449]: time="2024-12-13T14:09:38.671994159Z" level=info msg="Start subscribing containerd event" Dec 13 14:09:38.681508 env[1449]: time="2024-12-13T14:09:38.672052476Z" level=info msg="Start recovering state" Dec 13 14:09:38.681508 env[1449]: time="2024-12-13T14:09:38.672123311Z" level=info msg="Start event monitor" Dec 13 14:09:38.681508 env[1449]: time="2024-12-13T14:09:38.672143070Z" level=info msg="Start snapshots syncer" Dec 13 14:09:38.681508 env[1449]: time="2024-12-13T14:09:38.672153430Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:09:38.681508 env[1449]: time="2024-12-13T14:09:38.672161189Z" level=info msg="Start streaming server" Dec 13 14:09:38.665752 systemd[1]: Started containerd.service. Dec 13 14:09:38.672511 systemd[1]: Started systemd-logind.service. Dec 13 14:09:38.701008 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:09:39.033943 update_engine[1442]: I1213 14:09:39.018693 1442 main.cc:92] Flatcar Update Engine starting Dec 13 14:09:39.079500 systemd[1]: Started update-engine.service. Dec 13 14:09:39.081693 update_engine[1442]: I1213 14:09:39.081588 1442 update_check_scheduler.cc:74] Next update check in 6m53s Dec 13 14:09:39.086566 systemd[1]: Started locksmithd.service. Dec 13 14:09:39.199116 systemd[1]: Started kubelet.service. 
Dec 13 14:09:39.654678 kubelet[1529]: E1213 14:09:39.654635 1529 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:09:39.656321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:09:39.656439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:09:40.338749 locksmithd[1526]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:09:41.822250 sshd_keygen[1443]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:09:41.845592 systemd[1]: Finished sshd-keygen.service. Dec 13 14:09:41.852906 systemd[1]: Starting issuegen.service... Dec 13 14:09:41.857765 systemd[1]: Started waagent.service. Dec 13 14:09:41.862240 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:09:41.862393 systemd[1]: Finished issuegen.service. Dec 13 14:09:41.868218 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:09:41.892967 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:09:41.900257 systemd[1]: Started getty@tty1.service. Dec 13 14:09:41.906800 systemd[1]: Started serial-getty@ttyAMA0.service. Dec 13 14:09:41.912219 systemd[1]: Reached target getty.target. Dec 13 14:09:41.918237 systemd[1]: Reached target multi-user.target. Dec 13 14:09:41.924340 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:09:41.932683 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:09:41.932845 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:09:41.939519 systemd[1]: Startup finished in 725ms (kernel) + 12.897s (initrd) + 23.302s (userspace) = 36.925s. 
Dec 13 14:09:42.564348 login[1553]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:09:42.565941 login[1554]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:09:42.617889 systemd[1]: Created slice user-500.slice. Dec 13 14:09:42.618978 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:09:42.622049 systemd-logind[1437]: New session 2 of user core. Dec 13 14:09:42.625057 systemd-logind[1437]: New session 1 of user core. Dec 13 14:09:42.656881 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:09:42.658311 systemd[1]: Starting user@500.service... Dec 13 14:09:42.688173 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:09:42.951236 systemd[1557]: Queued start job for default target default.target. Dec 13 14:09:42.952848 systemd[1557]: Reached target paths.target. Dec 13 14:09:42.953034 systemd[1557]: Reached target sockets.target. Dec 13 14:09:42.953170 systemd[1557]: Reached target timers.target. Dec 13 14:09:42.953294 systemd[1557]: Reached target basic.target. Dec 13 14:09:42.953510 systemd[1557]: Reached target default.target. Dec 13 14:09:42.953562 systemd[1]: Started user@500.service. Dec 13 14:09:42.954420 systemd[1]: Started session-1.scope. Dec 13 14:09:42.954852 systemd[1557]: Startup finished in 260ms. Dec 13 14:09:42.955005 systemd[1]: Started session-2.scope. 
Dec 13 14:09:47.744864 waagent[1550]: 2024-12-13T14:09:47.744750Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Dec 13 14:09:47.752500 waagent[1550]: 2024-12-13T14:09:47.752395Z INFO Daemon Daemon OS: flatcar 3510.3.6 Dec 13 14:09:47.758037 waagent[1550]: 2024-12-13T14:09:47.757961Z INFO Daemon Daemon Python: 3.9.16 Dec 13 14:09:47.763496 waagent[1550]: 2024-12-13T14:09:47.763398Z INFO Daemon Daemon Run daemon Dec 13 14:09:47.769072 waagent[1550]: 2024-12-13T14:09:47.769001Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6' Dec 13 14:09:47.788308 waagent[1550]: 2024-12-13T14:09:47.788161Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 14:09:47.806828 waagent[1550]: 2024-12-13T14:09:47.806681Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:09:47.818332 waagent[1550]: 2024-12-13T14:09:47.818230Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:09:47.824816 waagent[1550]: 2024-12-13T14:09:47.824721Z INFO Daemon Daemon Using waagent for provisioning Dec 13 14:09:47.832092 waagent[1550]: 2024-12-13T14:09:47.832009Z INFO Daemon Daemon Activate resource disk Dec 13 14:09:47.837526 waagent[1550]: 2024-12-13T14:09:47.837422Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 14:09:47.853076 waagent[1550]: 2024-12-13T14:09:47.852983Z INFO Daemon Daemon Found device: None Dec 13 14:09:47.858002 waagent[1550]: 2024-12-13T14:09:47.857916Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 14:09:47.866904 waagent[1550]: 2024-12-13T14:09:47.866817Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, 
duration=0 Dec 13 14:09:47.879516 waagent[1550]: 2024-12-13T14:09:47.879413Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:09:47.886352 waagent[1550]: 2024-12-13T14:09:47.886266Z INFO Daemon Daemon Running default provisioning handler Dec 13 14:09:47.900242 waagent[1550]: 2024-12-13T14:09:47.900096Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 14:09:47.916915 waagent[1550]: 2024-12-13T14:09:47.916779Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:09:47.928002 waagent[1550]: 2024-12-13T14:09:47.927931Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:09:47.934354 waagent[1550]: 2024-12-13T14:09:47.934285Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 14:09:48.024890 waagent[1550]: 2024-12-13T14:09:48.024695Z INFO Daemon Daemon Successfully mounted dvd Dec 13 14:09:48.081335 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 14:09:48.116225 waagent[1550]: 2024-12-13T14:09:48.116078Z INFO Daemon Daemon Detect protocol endpoint Dec 13 14:09:48.121842 waagent[1550]: 2024-12-13T14:09:48.121763Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:09:48.127841 waagent[1550]: 2024-12-13T14:09:48.127771Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 14:09:48.135217 waagent[1550]: 2024-12-13T14:09:48.135152Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 14:09:48.141377 waagent[1550]: 2024-12-13T14:09:48.141317Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 14:09:48.147423 waagent[1550]: 2024-12-13T14:09:48.147359Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 14:09:48.238662 waagent[1550]: 2024-12-13T14:09:48.238591Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 14:09:48.246314 waagent[1550]: 2024-12-13T14:09:48.246269Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 14:09:48.252220 waagent[1550]: 2024-12-13T14:09:48.252152Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 14:09:48.722804 waagent[1550]: 2024-12-13T14:09:48.722651Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 14:09:48.738052 waagent[1550]: 2024-12-13T14:09:48.737982Z INFO Daemon Daemon Forcing an update of the goal state.. Dec 13 14:09:48.746153 waagent[1550]: 2024-12-13T14:09:48.746066Z INFO Daemon Daemon Fetching goal state [incarnation 1] Dec 13 14:09:48.833164 waagent[1550]: 2024-12-13T14:09:48.833035Z INFO Daemon Daemon Found private key matching thumbprint 051F7F177B84541EE31F9AEB037AD5D522D0082B Dec 13 14:09:48.842121 waagent[1550]: 2024-12-13T14:09:48.842040Z INFO Daemon Daemon Certificate with thumbprint E9F66D59FCF4B1564C12E43C47A3B91C6663F895 has no matching private key. 
Dec 13 14:09:48.852259 waagent[1550]: 2024-12-13T14:09:48.852186Z INFO Daemon Daemon Fetch goal state completed Dec 13 14:09:48.878214 waagent[1550]: 2024-12-13T14:09:48.878144Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 64b9b3b2-71d3-41e1-88a6-4fabf12c0e1c New eTag: 1165144176566950372] Dec 13 14:09:48.890294 waagent[1550]: 2024-12-13T14:09:48.890210Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:09:48.906371 waagent[1550]: 2024-12-13T14:09:48.906288Z INFO Daemon Daemon Starting provisioning Dec 13 14:09:48.911606 waagent[1550]: 2024-12-13T14:09:48.911540Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 14:09:48.916678 waagent[1550]: 2024-12-13T14:09:48.916618Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-478c04130c] Dec 13 14:09:48.956778 waagent[1550]: 2024-12-13T14:09:48.956651Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-478c04130c] Dec 13 14:09:48.963924 waagent[1550]: 2024-12-13T14:09:48.963844Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 14:09:48.970817 waagent[1550]: 2024-12-13T14:09:48.970755Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 14:09:48.987058 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Dec 13 14:09:48.987222 systemd[1]: Stopped systemd-networkd-wait-online.service. Dec 13 14:09:48.987276 systemd[1]: Stopping systemd-networkd-wait-online.service... Dec 13 14:09:48.987516 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:09:48.992503 systemd-networkd[1222]: eth0: DHCPv6 lease lost Dec 13 14:09:48.994108 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:09:48.994296 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:09:48.996416 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:09:49.024246 systemd-networkd[1601]: enP54742s1: Link UP Dec 13 14:09:49.024258 systemd-networkd[1601]: enP54742s1: Gained carrier Dec 13 14:09:49.025102 systemd-networkd[1601]: eth0: Link UP Dec 13 14:09:49.025113 systemd-networkd[1601]: eth0: Gained carrier Dec 13 14:09:49.025407 systemd-networkd[1601]: lo: Link UP Dec 13 14:09:49.025416 systemd-networkd[1601]: lo: Gained carrier Dec 13 14:09:49.025720 systemd-networkd[1601]: eth0: Gained IPv6LL Dec 13 14:09:49.025921 systemd-networkd[1601]: Enumeration completed Dec 13 14:09:49.026019 systemd[1]: Started systemd-networkd.service. Dec 13 14:09:49.027769 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:09:49.033991 waagent[1550]: 2024-12-13T14:09:49.027975Z INFO Daemon Daemon Create user account if not exists Dec 13 14:09:49.034637 systemd-networkd[1601]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:09:49.035396 waagent[1550]: 2024-12-13T14:09:49.035315Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 14:09:49.041675 waagent[1550]: 2024-12-13T14:09:49.041596Z INFO Daemon Daemon Configure sudoer Dec 13 14:09:49.048612 waagent[1550]: 2024-12-13T14:09:49.048541Z INFO Daemon Daemon Configure sshd Dec 13 14:09:49.053480 waagent[1550]: 2024-12-13T14:09:49.053383Z INFO Daemon Daemon Deploy ssh public key. Dec 13 14:09:49.062553 systemd-networkd[1601]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 14:09:49.064683 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:09:49.907209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:09:49.907380 systemd[1]: Stopped kubelet.service. Dec 13 14:09:49.908746 systemd[1]: Starting kubelet.service... Dec 13 14:09:49.989514 systemd[1]: Started kubelet.service. 
Dec 13 14:09:50.058406 kubelet[1614]: E1213 14:09:50.058346 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:09:50.061347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:09:50.061494 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:09:50.260148 waagent[1550]: 2024-12-13T14:09:50.259964Z INFO Daemon Daemon Provisioning complete Dec 13 14:09:50.281839 waagent[1550]: 2024-12-13T14:09:50.281772Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 14:09:50.289288 waagent[1550]: 2024-12-13T14:09:50.289143Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 14:09:50.301139 waagent[1550]: 2024-12-13T14:09:50.301068Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Dec 13 14:09:50.609596 waagent[1621]: 2024-12-13T14:09:50.609432Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Dec 13 14:09:50.610698 waagent[1621]: 2024-12-13T14:09:50.610644Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:09:50.610935 waagent[1621]: 2024-12-13T14:09:50.610888Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:09:50.630956 waagent[1621]: 2024-12-13T14:09:50.630866Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Dec 13 14:09:50.631291 waagent[1621]: 2024-12-13T14:09:50.631244Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Dec 13 14:09:50.717651 waagent[1621]: 2024-12-13T14:09:50.717375Z INFO ExtHandler ExtHandler Found private key matching thumbprint 051F7F177B84541EE31F9AEB037AD5D522D0082B Dec 13 14:09:50.718096 waagent[1621]: 2024-12-13T14:09:50.718041Z INFO ExtHandler ExtHandler Certificate with thumbprint E9F66D59FCF4B1564C12E43C47A3B91C6663F895 has no matching private key. Dec 13 14:09:50.718419 waagent[1621]: 2024-12-13T14:09:50.718371Z INFO ExtHandler ExtHandler Fetch goal state completed Dec 13 14:09:50.737312 waagent[1621]: 2024-12-13T14:09:50.737256Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: a7830b8f-c22c-4d6d-adf8-8e49a2e00442 New eTag: 1165144176566950372] Dec 13 14:09:50.738084 waagent[1621]: 2024-12-13T14:09:50.738028Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:09:50.853980 waagent[1621]: 2024-12-13T14:09:50.853815Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:09:50.864609 waagent[1621]: 2024-12-13T14:09:50.864484Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1621 Dec 13 14:09:50.868535 waagent[1621]: 2024-12-13T14:09:50.868466Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:09:50.870019 waagent[1621]: 2024-12-13T14:09:50.869960Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:09:50.988068 waagent[1621]: 2024-12-13T14:09:50.988009Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:09:50.988680 waagent[1621]: 2024-12-13T14:09:50.988626Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:09:50.996604 waagent[1621]: 2024-12-13T14:09:50.996550Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:09:50.997234 waagent[1621]: 2024-12-13T14:09:50.997180Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:09:50.998572 waagent[1621]: 2024-12-13T14:09:50.998509Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Dec 13 14:09:51.000027 waagent[1621]: 2024-12-13T14:09:50.999960Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:09:51.000274 waagent[1621]: 2024-12-13T14:09:51.000203Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:09:51.000836 waagent[1621]: 2024-12-13T14:09:51.000762Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:09:51.001433 waagent[1621]: 2024-12-13T14:09:51.001367Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Dec 13 14:09:51.001786 waagent[1621]: 2024-12-13T14:09:51.001725Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:09:51.001786 waagent[1621]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:09:51.001786 waagent[1621]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:09:51.001786 waagent[1621]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:09:51.001786 waagent[1621]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:09:51.001786 waagent[1621]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:09:51.001786 waagent[1621]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:09:51.004061 waagent[1621]: 2024-12-13T14:09:51.003898Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:09:51.004941 waagent[1621]: 2024-12-13T14:09:51.004866Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:09:51.005128 waagent[1621]: 2024-12-13T14:09:51.005072Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:09:51.005770 waagent[1621]: 2024-12-13T14:09:51.005696Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:09:51.005975 waagent[1621]: 2024-12-13T14:09:51.005909Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:09:51.006207 waagent[1621]: 2024-12-13T14:09:51.006142Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:09:51.006309 waagent[1621]: 2024-12-13T14:09:51.006245Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:09:51.006541 waagent[1621]: 2024-12-13T14:09:51.006480Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:09:51.007749 waagent[1621]: 2024-12-13T14:09:51.007435Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:09:51.007932 waagent[1621]: 2024-12-13T14:09:51.007861Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Dec 13 14:09:51.009060 waagent[1621]: 2024-12-13T14:09:51.008992Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:09:51.022884 waagent[1621]: 2024-12-13T14:09:51.022813Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Dec 13 14:09:51.023510 waagent[1621]: 2024-12-13T14:09:51.023430Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:09:51.024411 waagent[1621]: 2024-12-13T14:09:51.024353Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Dec 13 14:09:51.052269 waagent[1621]: 2024-12-13T14:09:51.052093Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1601' Dec 13 14:09:51.111995 waagent[1621]: 2024-12-13T14:09:51.111920Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Dec 13 14:09:51.145984 waagent[1621]: 2024-12-13T14:09:51.145793Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:09:51.145984 waagent[1621]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:09:51.145984 waagent[1621]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:09:51.145984 waagent[1621]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:06:ce brd ff:ff:ff:ff:ff:ff Dec 13 14:09:51.145984 waagent[1621]: 3: enP54742s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:06:ce brd ff:ff:ff:ff:ff:ff\ altname enP54742p0s2 Dec 13 14:09:51.145984 waagent[1621]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:09:51.145984 waagent[1621]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:09:51.145984 waagent[1621]: 2: eth0 inet 10.200.20.43/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:09:51.145984 waagent[1621]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:09:51.145984 waagent[1621]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:09:51.145984 waagent[1621]: 2: eth0 inet6 fe80::222:48ff:fe7a:6ce/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:09:51.468067 waagent[1621]: 2024-12-13T14:09:51.467836Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Dec 13 14:09:51.471241 waagent[1621]: 2024-12-13T14:09:51.471101Z INFO EnvHandler ExtHandler Firewall rules: Dec 13 14:09:51.471241 waagent[1621]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:09:51.471241 waagent[1621]: pkts bytes target prot opt in out source destination Dec 13 14:09:51.471241 waagent[1621]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:09:51.471241 waagent[1621]: pkts bytes target prot opt in 
out source destination Dec 13 14:09:51.471241 waagent[1621]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:09:51.471241 waagent[1621]: pkts bytes target prot opt in out source destination Dec 13 14:09:51.471241 waagent[1621]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:09:51.471241 waagent[1621]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:09:51.472742 waagent[1621]: 2024-12-13T14:09:51.472690Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 14:09:52.233403 waagent[1621]: 2024-12-13T14:09:52.233334Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Dec 13 14:09:52.304814 waagent[1550]: 2024-12-13T14:09:52.304695Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 14:09:52.311038 waagent[1550]: 2024-12-13T14:09:52.310980Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 14:09:53.529556 waagent[1660]: 2024-12-13T14:09:53.529435Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Dec 13 14:09:53.530223 waagent[1660]: 2024-12-13T14:09:53.530158Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6 Dec 13 14:09:53.530349 waagent[1660]: 2024-12-13T14:09:53.530302Z INFO ExtHandler ExtHandler Python: 3.9.16 Dec 13 14:09:53.530486 waagent[1660]: 2024-12-13T14:09:53.530427Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 13 14:09:53.538842 waagent[1660]: 2024-12-13T14:09:53.538722Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:09:53.539243 waagent[1660]: 2024-12-13T14:09:53.539183Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 
14:09:53.539388 waagent[1660]: 2024-12-13T14:09:53.539342Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:09:53.553063 waagent[1660]: 2024-12-13T14:09:53.552991Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 14:09:53.565422 waagent[1660]: 2024-12-13T14:09:53.565359Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 14:09:53.566511 waagent[1660]: 2024-12-13T14:09:53.566432Z INFO ExtHandler Dec 13 14:09:53.566658 waagent[1660]: 2024-12-13T14:09:53.566608Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b1764627-55aa-4c75-a8b8-214cfbf632c7 eTag: 1165144176566950372 source: Fabric] Dec 13 14:09:53.567370 waagent[1660]: 2024-12-13T14:09:53.567314Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 13 14:09:53.568623 waagent[1660]: 2024-12-13T14:09:53.568562Z INFO ExtHandler Dec 13 14:09:53.568756 waagent[1660]: 2024-12-13T14:09:53.568710Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 14:09:53.575589 waagent[1660]: 2024-12-13T14:09:53.575539Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 14:09:53.576040 waagent[1660]: 2024-12-13T14:09:53.575991Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:09:53.599129 waagent[1660]: 2024-12-13T14:09:53.599068Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Dec 13 14:09:53.671613 waagent[1660]: 2024-12-13T14:09:53.671471Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E9F66D59FCF4B1564C12E43C47A3B91C6663F895', 'hasPrivateKey': False} Dec 13 14:09:53.675680 waagent[1660]: 2024-12-13T14:09:53.675606Z INFO ExtHandler Downloaded certificate {'thumbprint': '051F7F177B84541EE31F9AEB037AD5D522D0082B', 'hasPrivateKey': True} Dec 13 14:09:53.676719 waagent[1660]: 2024-12-13T14:09:53.676658Z INFO ExtHandler Fetch goal state completed Dec 13 14:09:53.697307 waagent[1660]: 2024-12-13T14:09:53.697189Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Dec 13 14:09:53.709897 waagent[1660]: 2024-12-13T14:09:53.709793Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1660 Dec 13 14:09:53.713160 waagent[1660]: 2024-12-13T14:09:53.713097Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:09:53.714244 waagent[1660]: 2024-12-13T14:09:53.714185Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 13 14:09:53.714564 waagent[1660]: 2024-12-13T14:09:53.714509Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 13 14:09:53.716725 waagent[1660]: 2024-12-13T14:09:53.716669Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:09:53.722051 waagent[1660]: 2024-12-13T14:09:53.721989Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:09:53.722477 waagent[1660]: 2024-12-13T14:09:53.722397Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:09:53.730278 waagent[1660]: 2024-12-13T14:09:53.730214Z INFO ExtHandler ExtHandler Service: 
waagent-network-setup.service not enabled. Adding it now Dec 13 14:09:53.730818 waagent[1660]: 2024-12-13T14:09:53.730759Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:09:53.771123 waagent[1660]: 2024-12-13T14:09:53.770989Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Dec 13 14:09:53.774360 waagent[1660]: 2024-12-13T14:09:53.774243Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Dec 13 14:09:53.775542 waagent[1660]: 2024-12-13T14:09:53.775474Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 13 14:09:53.777163 waagent[1660]: 2024-12-13T14:09:53.777088Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:09:53.777873 waagent[1660]: 2024-12-13T14:09:53.777813Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:09:53.778127 waagent[1660]: 2024-12-13T14:09:53.778079Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:09:53.778831 waagent[1660]: 2024-12-13T14:09:53.778778Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Dec 13 14:09:53.779215 waagent[1660]: 2024-12-13T14:09:53.779163Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:09:53.779215 waagent[1660]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:09:53.779215 waagent[1660]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:09:53.779215 waagent[1660]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:09:53.779215 waagent[1660]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:09:53.779215 waagent[1660]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:09:53.779215 waagent[1660]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:09:53.781956 waagent[1660]: 2024-12-13T14:09:53.781798Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:09:53.783485 waagent[1660]: 2024-12-13T14:09:53.782428Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:09:53.784803 waagent[1660]: 2024-12-13T14:09:53.784665Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:09:53.785696 waagent[1660]: 2024-12-13T14:09:53.785631Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:09:53.785854 waagent[1660]: 2024-12-13T14:09:53.785806Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:09:53.785970 waagent[1660]: 2024-12-13T14:09:53.785928Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:09:53.786829 waagent[1660]: 2024-12-13T14:09:53.786754Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:09:53.786995 waagent[1660]: 2024-12-13T14:09:53.786917Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:09:53.787878 waagent[1660]: 2024-12-13T14:09:53.787812Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:09:53.787988 waagent[1660]: 2024-12-13T14:09:53.787937Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
Dec 13 14:09:53.792454 waagent[1660]: 2024-12-13T14:09:53.792355Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:09:53.805305 waagent[1660]: 2024-12-13T14:09:53.804898Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 14:09:53.808570 waagent[1660]: 2024-12-13T14:09:53.808413Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:09:53.808570 waagent[1660]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:09:53.808570 waagent[1660]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:09:53.808570 waagent[1660]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:06:ce brd ff:ff:ff:ff:ff:ff Dec 13 14:09:53.808570 waagent[1660]: 3: enP54742s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:06:ce brd ff:ff:ff:ff:ff:ff\ altname enP54742p0s2 Dec 13 14:09:53.808570 waagent[1660]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:09:53.808570 waagent[1660]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:09:53.808570 waagent[1660]: 2: eth0 inet 10.200.20.43/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:09:53.808570 waagent[1660]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:09:53.808570 waagent[1660]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:09:53.808570 waagent[1660]: 2: eth0 inet6 fe80::222:48ff:fe7a:6ce/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:09:53.832188 waagent[1660]: 2024-12-13T14:09:53.832094Z INFO ExtHandler ExtHandler Dec 13 14:09:53.833321 waagent[1660]: 2024-12-13T14:09:53.833253Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState
started [incarnation_1 channel: WireServer source: Fabric activity: 24e55d1c-79f0-4755-957f-92ea5c46aadf correlation 30398d7e-130c-483b-b75e-7ef8e4f3232e created: 2024-12-13T14:08:21.165283Z] Dec 13 14:09:53.836726 waagent[1660]: 2024-12-13T14:09:53.836646Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 14:09:53.841416 waagent[1660]: 2024-12-13T14:09:53.841336Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 9 ms] Dec 13 14:09:53.865487 waagent[1660]: 2024-12-13T14:09:53.865394Z INFO ExtHandler ExtHandler Looking for existing remote access users. Dec 13 14:09:53.900878 waagent[1660]: 2024-12-13T14:09:53.900675Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C13548AC-DC28-4019-9BC7-42F58C21DE01;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 14:09:53.914884 waagent[1660]: 2024-12-13T14:09:53.914762Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 14:09:53.914884 waagent[1660]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:09:53.914884 waagent[1660]: pkts bytes target prot opt in out source destination Dec 13 14:09:53.914884 waagent[1660]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:09:53.914884 waagent[1660]: pkts bytes target prot opt in out source destination Dec 13 14:09:53.914884 waagent[1660]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:09:53.914884 waagent[1660]: pkts bytes target prot opt in out source destination Dec 13 14:09:53.914884 waagent[1660]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:09:53.914884 waagent[1660]: 1 60 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:09:53.914884 waagent[1660]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:10:00.294326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 13 14:10:00.294528 systemd[1]: Stopped kubelet.service. Dec 13 14:10:00.295846 systemd[1]: Starting kubelet.service... Dec 13 14:10:00.413366 systemd[1]: Started kubelet.service. Dec 13 14:10:00.454033 kubelet[1704]: E1213 14:10:00.453997 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:10:00.455839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:10:00.455956 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:10:10.544548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:10:10.544721 systemd[1]: Stopped kubelet.service. Dec 13 14:10:10.546075 systemd[1]: Starting kubelet.service... Dec 13 14:10:10.757653 systemd[1]: Started kubelet.service. Dec 13 14:10:10.792727 kubelet[1716]: E1213 14:10:10.792669 1716 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:10:10.795049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:10:10.795176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:10:18.307090 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 13 14:10:21.044355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 14:10:21.044656 systemd[1]: Stopped kubelet.service. Dec 13 14:10:21.046252 systemd[1]: Starting kubelet.service... Dec 13 14:10:21.198750 systemd[1]: Started kubelet.service. 
Dec 13 14:10:21.235208 kubelet[1728]: E1213 14:10:21.235154 1728 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:10:21.237565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:10:21.237683 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:10:24.017543 update_engine[1442]: I1213 14:10:24.017498 1442 update_attempter.cc:509] Updating boot flags... Dec 13 14:10:28.263936 systemd[1]: Created slice system-sshd.slice. Dec 13 14:10:28.265337 systemd[1]: Started sshd@0-10.200.20.43:22-10.200.16.10:48922.service. Dec 13 14:10:28.855555 sshd[1801]: Accepted publickey for core from 10.200.16.10 port 48922 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:28.871937 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:28.876179 systemd[1]: Started session-3.scope. Dec 13 14:10:28.877219 systemd-logind[1437]: New session 3 of user core. Dec 13 14:10:29.244007 systemd[1]: Started sshd@1-10.200.20.43:22-10.200.16.10:43122.service. Dec 13 14:10:29.662497 sshd[1806]: Accepted publickey for core from 10.200.16.10 port 43122 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:29.664376 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:29.668279 systemd[1]: Started session-4.scope. Dec 13 14:10:29.668759 systemd-logind[1437]: New session 4 of user core. Dec 13 14:10:29.975005 sshd[1806]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:29.978027 systemd[1]: sshd@1-10.200.20.43:22-10.200.16.10:43122.service: Deactivated successfully. 
Dec 13 14:10:29.978190 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:10:29.978674 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:10:29.979579 systemd-logind[1437]: Removed session 4. Dec 13 14:10:30.041813 systemd[1]: Started sshd@2-10.200.20.43:22-10.200.16.10:43134.service. Dec 13 14:10:30.454226 sshd[1812]: Accepted publickey for core from 10.200.16.10 port 43134 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:30.455847 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:30.459495 systemd-logind[1437]: New session 5 of user core. Dec 13 14:10:30.459912 systemd[1]: Started session-5.scope. Dec 13 14:10:30.769608 sshd[1812]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:30.772378 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:10:30.772547 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:10:30.773225 systemd[1]: sshd@2-10.200.20.43:22-10.200.16.10:43134.service: Deactivated successfully. Dec 13 14:10:30.774246 systemd-logind[1437]: Removed session 5. Dec 13 14:10:30.842410 systemd[1]: Started sshd@3-10.200.20.43:22-10.200.16.10:43142.service. Dec 13 14:10:31.268801 sshd[1821]: Accepted publickey for core from 10.200.16.10 port 43142 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:31.270344 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:31.271077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 14:10:31.271282 systemd[1]: Stopped kubelet.service. Dec 13 14:10:31.272571 systemd[1]: Starting kubelet.service... Dec 13 14:10:31.275596 systemd[1]: Started session-6.scope. Dec 13 14:10:31.276508 systemd-logind[1437]: New session 6 of user core. Dec 13 14:10:31.552155 systemd[1]: Started kubelet.service. 
Dec 13 14:10:31.593640 sshd[1821]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:31.595473 kubelet[1829]: E1213 14:10:31.594638 1829 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:10:31.596024 systemd[1]: sshd@3-10.200.20.43:22-10.200.16.10:43142.service: Deactivated successfully. Dec 13 14:10:31.596719 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:10:31.597407 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:10:31.598059 systemd-logind[1437]: Removed session 6. Dec 13 14:10:31.599549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:10:31.599661 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:10:31.661266 systemd[1]: Started sshd@4-10.200.20.43:22-10.200.16.10:43144.service. Dec 13 14:10:32.081197 sshd[1837]: Accepted publickey for core from 10.200.16.10 port 43144 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:32.082829 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:32.086857 systemd[1]: Started session-7.scope. Dec 13 14:10:32.087355 systemd-logind[1437]: New session 7 of user core. Dec 13 14:10:32.560531 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:10:32.560737 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:10:32.572734 systemd[1]: Starting coreos-metadata.service... 
Dec 13 14:10:32.648489 coreos-metadata[1844]: Dec 13 14:10:32.648 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:10:32.651746 coreos-metadata[1844]: Dec 13 14:10:32.651 INFO Fetch successful Dec 13 14:10:32.651906 coreos-metadata[1844]: Dec 13 14:10:32.651 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 13 14:10:32.653797 coreos-metadata[1844]: Dec 13 14:10:32.653 INFO Fetch successful Dec 13 14:10:32.654127 coreos-metadata[1844]: Dec 13 14:10:32.654 INFO Fetching http://168.63.129.16/machine/ed831806-bca0-43fa-a925-76b53e1fa0b0/74dc680b%2D0e24%2D4e7f%2D86ad%2D2bd8beaeed76.%5Fci%2D3510.3.6%2Da%2D478c04130c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 13 14:10:32.656147 coreos-metadata[1844]: Dec 13 14:10:32.656 INFO Fetch successful Dec 13 14:10:32.689560 coreos-metadata[1844]: Dec 13 14:10:32.689 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:10:32.702279 coreos-metadata[1844]: Dec 13 14:10:32.702 INFO Fetch successful Dec 13 14:10:32.710966 systemd[1]: Finished coreos-metadata.service. Dec 13 14:10:36.117804 systemd[1]: Stopped kubelet.service. Dec 13 14:10:36.120406 systemd[1]: Starting kubelet.service... Dec 13 14:10:36.141615 systemd[1]: Reloading. Dec 13 14:10:36.209522 /usr/lib/systemd/system-generators/torcx-generator[1905]: time="2024-12-13T14:10:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:10:36.209552 /usr/lib/systemd/system-generators/torcx-generator[1905]: time="2024-12-13T14:10:36Z" level=info msg="torcx already run" Dec 13 14:10:36.283889 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Dec 13 14:10:36.283908 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:10:36.299179 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:10:36.387385 systemd[1]: Started kubelet.service. Dec 13 14:10:36.389330 systemd[1]: Stopping kubelet.service... Dec 13 14:10:36.390002 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:10:36.390432 systemd[1]: Stopped kubelet.service. Dec 13 14:10:36.392140 systemd[1]: Starting kubelet.service... Dec 13 14:10:36.539404 systemd[1]: Started kubelet.service. Dec 13 14:10:36.574775 kubelet[1969]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:10:36.574775 kubelet[1969]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:10:36.574775 kubelet[1969]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:10:36.575113 kubelet[1969]: I1213 14:10:36.574814 1969 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:10:37.223425 kubelet[1969]: I1213 14:10:37.223394 1969 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:10:37.223637 kubelet[1969]: I1213 14:10:37.223626 1969 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:10:37.223906 kubelet[1969]: I1213 14:10:37.223893 1969 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:10:37.239846 kubelet[1969]: I1213 14:10:37.239809 1969 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:10:37.247594 kubelet[1969]: I1213 14:10:37.247565 1969 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:10:37.248817 kubelet[1969]: I1213 14:10:37.248777 1969 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:10:37.248982 kubelet[1969]: I1213 14:10:37.248821 1969 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.200.20.43","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:10:37.249078 kubelet[1969]: I1213 14:10:37.248986 1969 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:10:37.249078 kubelet[1969]: I1213 14:10:37.248994 1969 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:10:37.249849 kubelet[1969]: I1213 14:10:37.249832 1969 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:10:37.250609 kubelet[1969]: I1213 14:10:37.250584 1969 kubelet.go:400] "Attempting to sync node 
with API server" Dec 13 14:10:37.250609 kubelet[1969]: I1213 14:10:37.250605 1969 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:10:37.250706 kubelet[1969]: I1213 14:10:37.250634 1969 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:10:37.250706 kubelet[1969]: I1213 14:10:37.250652 1969 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:10:37.251068 kubelet[1969]: E1213 14:10:37.251050 1969 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:37.251232 kubelet[1969]: E1213 14:10:37.251217 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:37.251394 kubelet[1969]: I1213 14:10:37.251358 1969 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:10:37.251616 kubelet[1969]: I1213 14:10:37.251601 1969 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:10:37.251671 kubelet[1969]: W1213 14:10:37.251642 1969 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 14:10:37.252119 kubelet[1969]: I1213 14:10:37.252100 1969 server.go:1264] "Started kubelet" Dec 13 14:10:37.257690 kubelet[1969]: I1213 14:10:37.257643 1969 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:10:37.258071 kubelet[1969]: I1213 14:10:37.258010 1969 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:10:37.258319 kubelet[1969]: I1213 14:10:37.258287 1969 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:10:37.258901 kubelet[1969]: I1213 14:10:37.258883 1969 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:10:37.268001 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:10:37.268424 kubelet[1969]: I1213 14:10:37.268400 1969 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:10:37.271667 kubelet[1969]: E1213 14:10:37.271647 1969 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:10:37.271891 kubelet[1969]: W1213 14:10:37.271873 1969 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 14:10:37.271977 kubelet[1969]: E1213 14:10:37.271966 1969 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 14:10:37.272287 kubelet[1969]: W1213 14:10:37.272269 1969 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.20.43" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:10:37.272388 kubelet[1969]: E1213 14:10:37.272376 1969 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.43" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:10:37.272654 kubelet[1969]: E1213 14:10:37.272564 1969 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.43.1810c1e300c1cff8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.43,UID:10.200.20.43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.20.43,},FirstTimestamp:2024-12-13 14:10:37.252079608 +0000 UTC m=+0.708416769,LastTimestamp:2024-12-13 14:10:37.252079608 +0000 UTC 
m=+0.708416769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.43,}" Dec 13 14:10:37.273900 kubelet[1969]: I1213 14:10:37.273870 1969 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:10:37.274244 kubelet[1969]: I1213 14:10:37.274211 1969 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:10:37.274476 kubelet[1969]: I1213 14:10:37.272812 1969 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:10:37.275147 kubelet[1969]: I1213 14:10:37.272828 1969 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:10:37.276822 kubelet[1969]: I1213 14:10:37.276805 1969 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:10:37.277410 kubelet[1969]: E1213 14:10:37.277333 1969 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.43.1810c1e301ec3aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.43,UID:10.200.20.43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.200.20.43,},FirstTimestamp:2024-12-13 14:10:37.271636644 +0000 UTC m=+0.727973925,LastTimestamp:2024-12-13 14:10:37.271636644 +0000 UTC m=+0.727973925,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.43,}" Dec 13 14:10:37.277631 kubelet[1969]: I1213 14:10:37.276920 1969 reconciler.go:26] "Reconciler: start 
to sync state" Dec 13 14:10:37.290414 kubelet[1969]: W1213 14:10:37.290379 1969 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 14:10:37.290584 kubelet[1969]: E1213 14:10:37.290418 1969 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 14:10:37.290962 kubelet[1969]: E1213 14:10:37.290933 1969 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.43\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 14:10:37.294683 kubelet[1969]: I1213 14:10:37.294667 1969 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:10:37.294777 kubelet[1969]: I1213 14:10:37.294766 1969 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:10:37.294900 kubelet[1969]: I1213 14:10:37.294876 1969 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:10:37.302734 kubelet[1969]: I1213 14:10:37.302719 1969 policy_none.go:49] "None policy: Start" Dec 13 14:10:37.303379 kubelet[1969]: I1213 14:10:37.303366 1969 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:10:37.303493 kubelet[1969]: I1213 14:10:37.303483 1969 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:10:37.311864 systemd[1]: Created slice kubepods.slice. Dec 13 14:10:37.316486 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:10:37.319420 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 14:10:37.328057 kubelet[1969]: I1213 14:10:37.328023 1969 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:10:37.328237 kubelet[1969]: I1213 14:10:37.328195 1969 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:10:37.328313 kubelet[1969]: I1213 14:10:37.328298 1969 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:10:37.331812 kubelet[1969]: E1213 14:10:37.331692 1969 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.43\" not found" Dec 13 14:10:37.353458 kubelet[1969]: I1213 14:10:37.352973 1969 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:10:37.353912 kubelet[1969]: I1213 14:10:37.353883 1969 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:10:37.353912 kubelet[1969]: I1213 14:10:37.353916 1969 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:10:37.354010 kubelet[1969]: I1213 14:10:37.353935 1969 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:10:37.354010 kubelet[1969]: E1213 14:10:37.353989 1969 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 14:10:37.374099 kubelet[1969]: I1213 14:10:37.374072 1969 kubelet_node_status.go:73] "Attempting to register node" node="10.200.20.43" Dec 13 14:10:37.378295 kubelet[1969]: I1213 14:10:37.378277 1969 kubelet_node_status.go:76] "Successfully registered node" node="10.200.20.43" Dec 13 14:10:37.392266 kubelet[1969]: E1213 14:10:37.392236 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:37.492470 kubelet[1969]: E1213 14:10:37.492349 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"10.200.20.43\" not found" Dec 13 14:10:37.565339 sudo[1840]: pam_unix(sudo:session): session closed for user root Dec 13 14:10:37.592684 kubelet[1969]: E1213 14:10:37.592651 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:37.662560 sshd[1837]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:37.665627 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:10:37.666169 systemd[1]: sshd@4-10.200.20.43:22-10.200.16.10:43144.service: Deactivated successfully. Dec 13 14:10:37.667071 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:10:37.668078 systemd-logind[1437]: Removed session 7. Dec 13 14:10:37.693269 kubelet[1969]: E1213 14:10:37.693239 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:37.794174 kubelet[1969]: E1213 14:10:37.793980 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:37.894645 kubelet[1969]: E1213 14:10:37.894619 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:37.995222 kubelet[1969]: E1213 14:10:37.995204 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:38.095870 kubelet[1969]: E1213 14:10:38.095807 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:38.196454 kubelet[1969]: E1213 14:10:38.196422 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:38.225653 kubelet[1969]: I1213 14:10:38.225626 1969 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:10:38.225853 
kubelet[1969]: W1213 14:10:38.225820 1969 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:10:38.252151 kubelet[1969]: E1213 14:10:38.252133 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:38.297252 kubelet[1969]: E1213 14:10:38.297210 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:38.398420 kubelet[1969]: E1213 14:10:38.398008 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:38.498854 kubelet[1969]: E1213 14:10:38.498823 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:38.599213 kubelet[1969]: E1213 14:10:38.599185 1969 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.20.43\" not found" Dec 13 14:10:38.700954 kubelet[1969]: I1213 14:10:38.700604 1969 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:10:38.701068 env[1449]: time="2024-12-13T14:10:38.700886727Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 14:10:38.701284 kubelet[1969]: I1213 14:10:38.701202 1969 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:10:39.251464 kubelet[1969]: I1213 14:10:39.251423 1969 apiserver.go:52] "Watching apiserver" Dec 13 14:10:39.252507 kubelet[1969]: E1213 14:10:39.252483 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:39.255115 kubelet[1969]: I1213 14:10:39.255087 1969 topology_manager.go:215] "Topology Admit Handler" podUID="87648535-e891-4c04-876d-fe173b27df39" podNamespace="kube-system" podName="cilium-dg2zz" Dec 13 14:10:39.255233 kubelet[1969]: I1213 14:10:39.255210 1969 topology_manager.go:215] "Topology Admit Handler" podUID="4dc4aa51-ca36-47e8-969a-acbfa473fae6" podNamespace="kube-system" podName="kube-proxy-mp8ql" Dec 13 14:10:39.259726 systemd[1]: Created slice kubepods-besteffort-pod4dc4aa51_ca36_47e8_969a_acbfa473fae6.slice. Dec 13 14:10:39.272110 systemd[1]: Created slice kubepods-burstable-pod87648535_e891_4c04_876d_fe173b27df39.slice. 
Dec 13 14:10:39.277081 kubelet[1969]: I1213 14:10:39.277055 1969 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:10:39.289202 kubelet[1969]: I1213 14:10:39.289156 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4dc4aa51-ca36-47e8-969a-acbfa473fae6-kube-proxy\") pod \"kube-proxy-mp8ql\" (UID: \"4dc4aa51-ca36-47e8-969a-acbfa473fae6\") " pod="kube-system/kube-proxy-mp8ql" Dec 13 14:10:39.289297 kubelet[1969]: I1213 14:10:39.289210 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nxng\" (UniqueName: \"kubernetes.io/projected/4dc4aa51-ca36-47e8-969a-acbfa473fae6-kube-api-access-8nxng\") pod \"kube-proxy-mp8ql\" (UID: \"4dc4aa51-ca36-47e8-969a-acbfa473fae6\") " pod="kube-system/kube-proxy-mp8ql" Dec 13 14:10:39.289297 kubelet[1969]: I1213 14:10:39.289229 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87648535-e891-4c04-876d-fe173b27df39-cilium-config-path\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289297 kubelet[1969]: I1213 14:10:39.289244 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87648535-e891-4c04-876d-fe173b27df39-hubble-tls\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289297 kubelet[1969]: I1213 14:10:39.289264 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-etc-cni-netd\") pod \"cilium-dg2zz\" (UID: 
\"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289297 kubelet[1969]: I1213 14:10:39.289278 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-xtables-lock\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289422 kubelet[1969]: I1213 14:10:39.289291 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m2zw\" (UniqueName: \"kubernetes.io/projected/87648535-e891-4c04-876d-fe173b27df39-kube-api-access-2m2zw\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289422 kubelet[1969]: I1213 14:10:39.289311 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cilium-run\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289422 kubelet[1969]: I1213 14:10:39.289326 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-hostproc\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289422 kubelet[1969]: I1213 14:10:39.289340 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-host-proc-sys-kernel\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289422 kubelet[1969]: I1213 
14:10:39.289354 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dc4aa51-ca36-47e8-969a-acbfa473fae6-xtables-lock\") pod \"kube-proxy-mp8ql\" (UID: \"4dc4aa51-ca36-47e8-969a-acbfa473fae6\") " pod="kube-system/kube-proxy-mp8ql" Dec 13 14:10:39.289422 kubelet[1969]: I1213 14:10:39.289368 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dc4aa51-ca36-47e8-969a-acbfa473fae6-lib-modules\") pod \"kube-proxy-mp8ql\" (UID: \"4dc4aa51-ca36-47e8-969a-acbfa473fae6\") " pod="kube-system/kube-proxy-mp8ql" Dec 13 14:10:39.289577 kubelet[1969]: I1213 14:10:39.289383 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-bpf-maps\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289577 kubelet[1969]: I1213 14:10:39.289402 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cilium-cgroup\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289577 kubelet[1969]: I1213 14:10:39.289418 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cni-path\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289577 kubelet[1969]: I1213 14:10:39.289433 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-lib-modules\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289577 kubelet[1969]: I1213 14:10:39.289475 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87648535-e891-4c04-876d-fe173b27df39-clustermesh-secrets\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.289577 kubelet[1969]: I1213 14:10:39.289490 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-host-proc-sys-net\") pod \"cilium-dg2zz\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " pod="kube-system/cilium-dg2zz" Dec 13 14:10:39.571532 env[1449]: time="2024-12-13T14:10:39.571483205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mp8ql,Uid:4dc4aa51-ca36-47e8-969a-acbfa473fae6,Namespace:kube-system,Attempt:0,}" Dec 13 14:10:39.580484 env[1449]: time="2024-12-13T14:10:39.580340495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dg2zz,Uid:87648535-e891-4c04-876d-fe173b27df39,Namespace:kube-system,Attempt:0,}" Dec 13 14:10:40.253226 kubelet[1969]: E1213 14:10:40.253189 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:41.253584 kubelet[1969]: E1213 14:10:41.253546 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:41.861424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount344158201.mount: Deactivated successfully. 
Dec 13 14:10:41.884848 env[1449]: time="2024-12-13T14:10:41.884795185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:41.893572 env[1449]: time="2024-12-13T14:10:41.893539519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:41.896953 env[1449]: time="2024-12-13T14:10:41.896917334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:41.905628 env[1449]: time="2024-12-13T14:10:41.905601548Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:41.907891 env[1449]: time="2024-12-13T14:10:41.907855131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:41.911205 env[1449]: time="2024-12-13T14:10:41.911178026Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:41.922353 env[1449]: time="2024-12-13T14:10:41.922313422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:41.922974 env[1449]: time="2024-12-13T14:10:41.922945378Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:42.062910 env[1449]: time="2024-12-13T14:10:42.062834416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:10:42.062910 env[1449]: time="2024-12-13T14:10:42.062874136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:10:42.063628 env[1449]: time="2024-12-13T14:10:42.063212413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:10:42.063628 env[1449]: time="2024-12-13T14:10:42.063244893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:10:42.063628 env[1449]: time="2024-12-13T14:10:42.063255693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:42.063628 env[1449]: time="2024-12-13T14:10:42.063456372Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a pid=2022 runtime=io.containerd.runc.v2 Dec 13 14:10:42.063803 env[1449]: time="2024-12-13T14:10:42.062891096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:42.063803 env[1449]: time="2024-12-13T14:10:42.063463972Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/643ff2eb9723932956dcfbadc5ed4be4059634496f4a6f500fc9fe3f8a80c658 pid=2024 runtime=io.containerd.runc.v2 Dec 13 14:10:42.080363 systemd[1]: Started cri-containerd-1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a.scope. Dec 13 14:10:42.088812 systemd[1]: Started cri-containerd-643ff2eb9723932956dcfbadc5ed4be4059634496f4a6f500fc9fe3f8a80c658.scope. Dec 13 14:10:42.126776 env[1449]: time="2024-12-13T14:10:42.126246552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dg2zz,Uid:87648535-e891-4c04-876d-fe173b27df39,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\"" Dec 13 14:10:42.129938 env[1449]: time="2024-12-13T14:10:42.129900845Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:10:42.130163 env[1449]: time="2024-12-13T14:10:42.130138203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mp8ql,Uid:4dc4aa51-ca36-47e8-969a-acbfa473fae6,Namespace:kube-system,Attempt:0,} returns sandbox id \"643ff2eb9723932956dcfbadc5ed4be4059634496f4a6f500fc9fe3f8a80c658\"" Dec 13 14:10:42.254410 kubelet[1969]: E1213 14:10:42.254352 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:43.254481 kubelet[1969]: E1213 14:10:43.254429 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:44.255432 kubelet[1969]: E1213 14:10:44.255386 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:45.256598 
kubelet[1969]: E1213 14:10:45.256559 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:46.257184 kubelet[1969]: E1213 14:10:46.257144 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:46.571376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161774628.mount: Deactivated successfully. Dec 13 14:10:47.257526 kubelet[1969]: E1213 14:10:47.257490 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:48.258655 kubelet[1969]: E1213 14:10:48.258574 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:48.835233 env[1449]: time="2024-12-13T14:10:48.835162686Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:48.841840 env[1449]: time="2024-12-13T14:10:48.841807405Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:48.846459 env[1449]: time="2024-12-13T14:10:48.846420536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:48.847168 env[1449]: time="2024-12-13T14:10:48.847141332Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 14:10:48.849560 env[1449]: time="2024-12-13T14:10:48.849534997Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 14:10:48.855069 env[1449]: time="2024-12-13T14:10:48.855016203Z" level=info msg="CreateContainer within sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:10:48.878215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944539124.mount: Deactivated successfully. Dec 13 14:10:48.884106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545518815.mount: Deactivated successfully. Dec 13 14:10:48.907325 env[1449]: time="2024-12-13T14:10:48.907270398Z" level=info msg="CreateContainer within sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961\"" Dec 13 14:10:48.908095 env[1449]: time="2024-12-13T14:10:48.908070633Z" level=info msg="StartContainer for \"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961\"" Dec 13 14:10:48.925351 systemd[1]: Started cri-containerd-feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961.scope. Dec 13 14:10:48.961551 env[1449]: time="2024-12-13T14:10:48.961500101Z" level=info msg="StartContainer for \"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961\" returns successfully" Dec 13 14:10:48.965483 systemd[1]: cri-containerd-feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961.scope: Deactivated successfully. 
Dec 13 14:10:49.259788 kubelet[1969]: E1213 14:10:49.259208 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:49.876478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961-rootfs.mount: Deactivated successfully. Dec 13 14:10:50.260192 kubelet[1969]: E1213 14:10:50.260082 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:50.927997 env[1449]: time="2024-12-13T14:10:50.927955142Z" level=info msg="shim disconnected" id=feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961 Dec 13 14:10:50.928341 env[1449]: time="2024-12-13T14:10:50.928322780Z" level=warning msg="cleaning up after shim disconnected" id=feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961 namespace=k8s.io Dec 13 14:10:50.928436 env[1449]: time="2024-12-13T14:10:50.928422180Z" level=info msg="cleaning up dead shim" Dec 13 14:10:50.935564 env[1449]: time="2024-12-13T14:10:50.935526298Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2139 runtime=io.containerd.runc.v2\n" Dec 13 14:10:51.260590 kubelet[1969]: E1213 14:10:51.260494 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:51.379222 env[1449]: time="2024-12-13T14:10:51.379178742Z" level=info msg="CreateContainer within sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:10:51.404337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968157920.mount: Deactivated successfully. 
Dec 13 14:10:51.424430 env[1449]: time="2024-12-13T14:10:51.424388243Z" level=info msg="CreateContainer within sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f\"" Dec 13 14:10:51.425249 env[1449]: time="2024-12-13T14:10:51.425190638Z" level=info msg="StartContainer for \"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f\"" Dec 13 14:10:51.461309 systemd[1]: Started cri-containerd-66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f.scope. Dec 13 14:10:51.496883 env[1449]: time="2024-12-13T14:10:51.496840907Z" level=info msg="StartContainer for \"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f\" returns successfully" Dec 13 14:10:51.504623 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:10:51.504817 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:10:51.504988 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:10:51.506817 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:10:51.510683 systemd[1]: cri-containerd-66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f.scope: Deactivated successfully. Dec 13 14:10:51.519164 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:10:51.597533 env[1449]: time="2024-12-13T14:10:51.597481970Z" level=info msg="shim disconnected" id=66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f Dec 13 14:10:51.597533 env[1449]: time="2024-12-13T14:10:51.597525330Z" level=warning msg="cleaning up after shim disconnected" id=66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f namespace=k8s.io Dec 13 14:10:51.597533 env[1449]: time="2024-12-13T14:10:51.597535010Z" level=info msg="cleaning up dead shim" Dec 13 14:10:51.638108 env[1449]: time="2024-12-13T14:10:51.638060297Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2200 runtime=io.containerd.runc.v2\n" Dec 13 14:10:52.261513 kubelet[1969]: E1213 14:10:52.261474 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:10:52.344559 env[1449]: time="2024-12-13T14:10:52.344511096Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:52.353661 env[1449]: time="2024-12-13T14:10:52.353624645Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:52.356345 env[1449]: time="2024-12-13T14:10:52.356321630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:10:52.360322 env[1449]: time="2024-12-13T14:10:52.360298608Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 
14:10:52.360699 env[1449]: time="2024-12-13T14:10:52.360674286Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 14:10:52.363057 env[1449]: time="2024-12-13T14:10:52.363029393Z" level=info msg="CreateContainer within sandbox \"643ff2eb9723932956dcfbadc5ed4be4059634496f4a6f500fc9fe3f8a80c658\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:10:52.384257 env[1449]: time="2024-12-13T14:10:52.384214114Z" level=info msg="CreateContainer within sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:10:52.395452 env[1449]: time="2024-12-13T14:10:52.395401932Z" level=info msg="CreateContainer within sandbox \"643ff2eb9723932956dcfbadc5ed4be4059634496f4a6f500fc9fe3f8a80c658\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef37bb3142ee8a457c3c081ef48cef7a2deaaa3f7b7231795e600041bb7bb1d6\"" Dec 13 14:10:52.397732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f-rootfs.mount: Deactivated successfully. Dec 13 14:10:52.399129 env[1449]: time="2024-12-13T14:10:52.398893392Z" level=info msg="StartContainer for \"ef37bb3142ee8a457c3c081ef48cef7a2deaaa3f7b7231795e600041bb7bb1d6\"" Dec 13 14:10:52.425028 systemd[1]: Started cri-containerd-ef37bb3142ee8a457c3c081ef48cef7a2deaaa3f7b7231795e600041bb7bb1d6.scope. 
Dec 13 14:10:52.437311 env[1449]: time="2024-12-13T14:10:52.437254178Z" level=info msg="CreateContainer within sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c\""
Dec 13 14:10:52.438222 env[1449]: time="2024-12-13T14:10:52.438181733Z" level=info msg="StartContainer for \"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c\""
Dec 13 14:10:52.458775 systemd[1]: Started cri-containerd-e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c.scope.
Dec 13 14:10:52.487270 env[1449]: time="2024-12-13T14:10:52.487206659Z" level=info msg="StartContainer for \"ef37bb3142ee8a457c3c081ef48cef7a2deaaa3f7b7231795e600041bb7bb1d6\" returns successfully"
Dec 13 14:10:52.512157 systemd[1]: cri-containerd-e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c.scope: Deactivated successfully.
Dec 13 14:10:52.513596 env[1449]: time="2024-12-13T14:10:52.513434352Z" level=info msg="StartContainer for \"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c\" returns successfully"
Dec 13 14:10:53.044035 env[1449]: time="2024-12-13T14:10:53.043985154Z" level=info msg="shim disconnected" id=e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c
Dec 13 14:10:53.044035 env[1449]: time="2024-12-13T14:10:53.044031474Z" level=warning msg="cleaning up after shim disconnected" id=e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c namespace=k8s.io
Dec 13 14:10:53.044035 env[1449]: time="2024-12-13T14:10:53.044041634Z" level=info msg="cleaning up dead shim"
Dec 13 14:10:53.057679 env[1449]: time="2024-12-13T14:10:53.057631560Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2357 runtime=io.containerd.runc.v2\n"
Dec 13 14:10:53.262416 kubelet[1969]: E1213 14:10:53.262361 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:10:53.386694 env[1449]: time="2024-12-13T14:10:53.386601489Z" level=info msg="CreateContainer within sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:10:53.397818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408852532.mount: Deactivated successfully.
Dec 13 14:10:53.399939 kubelet[1969]: I1213 14:10:53.399882 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mp8ql" podStartSLOduration=6.169246774 podStartE2EDuration="16.399866337s" podCreationTimestamp="2024-12-13 14:10:37 +0000 UTC" firstStartedPulling="2024-12-13 14:10:42.131038717 +0000 UTC m=+5.587375918" lastFinishedPulling="2024-12-13 14:10:52.36165828 +0000 UTC m=+15.817995481" observedRunningTime="2024-12-13 14:10:53.399495819 +0000 UTC m=+16.855833020" watchObservedRunningTime="2024-12-13 14:10:53.399866337 +0000 UTC m=+16.856203538"
Dec 13 14:10:53.410978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066112510.mount: Deactivated successfully.
Dec 13 14:10:53.416869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2106496300.mount: Deactivated successfully.
Dec 13 14:10:53.431239 env[1449]: time="2024-12-13T14:10:53.431191367Z" level=info msg="CreateContainer within sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc\""
Dec 13 14:10:53.431894 env[1449]: time="2024-12-13T14:10:53.431871083Z" level=info msg="StartContainer for \"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc\""
Dec 13 14:10:53.445341 systemd[1]: Started cri-containerd-7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc.scope.
Dec 13 14:10:53.472285 systemd[1]: cri-containerd-7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc.scope: Deactivated successfully.
Dec 13 14:10:53.474174 env[1449]: time="2024-12-13T14:10:53.474106453Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87648535_e891_4c04_876d_fe173b27df39.slice/cri-containerd-7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc.scope/memory.events\": no such file or directory"
Dec 13 14:10:53.479057 env[1449]: time="2024-12-13T14:10:53.479015067Z" level=info msg="StartContainer for \"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc\" returns successfully"
Dec 13 14:10:53.510305 env[1449]: time="2024-12-13T14:10:53.510257897Z" level=info msg="shim disconnected" id=7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc
Dec 13 14:10:53.510707 env[1449]: time="2024-12-13T14:10:53.510679334Z" level=warning msg="cleaning up after shim disconnected" id=7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc namespace=k8s.io
Dec 13 14:10:53.510808 env[1449]: time="2024-12-13T14:10:53.510793854Z" level=info msg="cleaning up dead shim"
Dec 13 14:10:53.518315 env[1449]: time="2024-12-13T14:10:53.518275813Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2476 runtime=io.containerd.runc.v2\n"
Dec 13 14:10:54.263395 kubelet[1969]: E1213 14:10:54.263344 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:10:54.391226 env[1449]: time="2024-12-13T14:10:54.391160638Z" level=info msg="CreateContainer within sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:10:54.419765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4282638831.mount: Deactivated successfully.
Dec 13 14:10:54.424838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount533701602.mount: Deactivated successfully.
Dec 13 14:10:54.440421 env[1449]: time="2024-12-13T14:10:54.440372817Z" level=info msg="CreateContainer within sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\""
Dec 13 14:10:54.441406 env[1449]: time="2024-12-13T14:10:54.441378012Z" level=info msg="StartContainer for \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\""
Dec 13 14:10:54.456258 systemd[1]: Started cri-containerd-918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701.scope.
Dec 13 14:10:54.493743 env[1449]: time="2024-12-13T14:10:54.493666335Z" level=info msg="StartContainer for \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\" returns successfully"
Dec 13 14:10:54.573484 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Dec 13 14:10:54.574214 kubelet[1969]: I1213 14:10:54.574009 1969 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 14:10:55.105472 kernel: Initializing XFRM netlink socket
Dec 13 14:10:55.114475 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Dec 13 14:10:55.264217 kubelet[1969]: E1213 14:10:55.264170 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:10:55.408398 kubelet[1969]: I1213 14:10:55.408282 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dg2zz" podStartSLOduration=11.688515912 podStartE2EDuration="18.408263102s" podCreationTimestamp="2024-12-13 14:10:37 +0000 UTC" firstStartedPulling="2024-12-13 14:10:42.129072571 +0000 UTC m=+5.585409772" lastFinishedPulling="2024-12-13 14:10:48.848819761 +0000 UTC m=+12.305156962" observedRunningTime="2024-12-13 14:10:55.408232463 +0000 UTC m=+18.864569664" watchObservedRunningTime="2024-12-13 14:10:55.408263102 +0000 UTC m=+18.864600303"
Dec 13 14:10:56.265091 kubelet[1969]: E1213 14:10:56.265060 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:10:56.754302 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 14:10:56.755660 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:10:56.741793 systemd-networkd[1601]: cilium_host: Link UP
Dec 13 14:10:56.742065 systemd-networkd[1601]: cilium_net: Link UP
Dec 13 14:10:56.759020 systemd-networkd[1601]: cilium_net: Gained carrier
Dec 13 14:10:56.759937 systemd-networkd[1601]: cilium_host: Gained carrier
Dec 13 14:10:56.893914 systemd-networkd[1601]: cilium_vxlan: Link UP
Dec 13 14:10:56.893921 systemd-networkd[1601]: cilium_vxlan: Gained carrier
Dec 13 14:10:56.938278 kubelet[1969]: I1213 14:10:56.938232 1969 topology_manager.go:215] "Topology Admit Handler" podUID="8f0fb905-356a-42ec-8fa9-f58b73eafd09" podNamespace="default" podName="nginx-deployment-85f456d6dd-hmlvv"
Dec 13 14:10:56.943979 systemd[1]: Created slice kubepods-besteffort-pod8f0fb905_356a_42ec_8fa9_f58b73eafd09.slice.
Dec 13 14:10:56.988015 kubelet[1969]: I1213 14:10:56.987938 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hcpx\" (UniqueName: \"kubernetes.io/projected/8f0fb905-356a-42ec-8fa9-f58b73eafd09-kube-api-access-7hcpx\") pod \"nginx-deployment-85f456d6dd-hmlvv\" (UID: \"8f0fb905-356a-42ec-8fa9-f58b73eafd09\") " pod="default/nginx-deployment-85f456d6dd-hmlvv"
Dec 13 14:10:57.166483 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:10:57.247431 env[1449]: time="2024-12-13T14:10:57.247029922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-hmlvv,Uid:8f0fb905-356a-42ec-8fa9-f58b73eafd09,Namespace:default,Attempt:0,}"
Dec 13 14:10:57.251055 kubelet[1969]: E1213 14:10:57.251013 1969 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:10:57.266310 kubelet[1969]: E1213 14:10:57.266271 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:10:57.557587 systemd-networkd[1601]: cilium_host: Gained IPv6LL
Dec 13 14:10:57.749554 systemd-networkd[1601]: cilium_net: Gained IPv6LL
Dec 13 14:10:57.867687 systemd-networkd[1601]: lxc_health: Link UP
Dec 13 14:10:57.879529 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:10:57.879331 systemd-networkd[1601]: lxc_health: Gained carrier
Dec 13 14:10:58.069626 systemd-networkd[1601]: cilium_vxlan: Gained IPv6LL
Dec 13 14:10:58.266798 kubelet[1969]: E1213 14:10:58.266670 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:10:58.310409 systemd-networkd[1601]: lxc468285b21e6d: Link UP
Dec 13 14:10:58.318473 kernel: eth0: renamed from tmpb14c5
Dec 13 14:10:58.328903 systemd-networkd[1601]: lxc468285b21e6d: Gained carrier
Dec 13 14:10:58.329749 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc468285b21e6d: link becomes ready
Dec 13 14:10:59.267626 kubelet[1969]: E1213 14:10:59.267592 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:10:59.285634 systemd-networkd[1601]: lxc_health: Gained IPv6LL
Dec 13 14:10:59.413615 systemd-networkd[1601]: lxc468285b21e6d: Gained IPv6LL
Dec 13 14:11:00.268877 kubelet[1969]: E1213 14:11:00.268832 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:01.269486 kubelet[1969]: E1213 14:11:01.269433 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:01.884715 env[1449]: time="2024-12-13T14:11:01.884641406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:11:01.885067 env[1449]: time="2024-12-13T14:11:01.885040884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:11:01.885182 env[1449]: time="2024-12-13T14:11:01.885147043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:11:01.885505 env[1449]: time="2024-12-13T14:11:01.885439402Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b14c54bd50d1e0bf1bf8a7d5962cbcd7975b1ac232619d42123eef56857b9d81 pid=3015 runtime=io.containerd.runc.v2
Dec 13 14:11:01.906413 systemd[1]: run-containerd-runc-k8s.io-b14c54bd50d1e0bf1bf8a7d5962cbcd7975b1ac232619d42123eef56857b9d81-runc.v18fdt.mount: Deactivated successfully.
Dec 13 14:11:01.910842 systemd[1]: Started cri-containerd-b14c54bd50d1e0bf1bf8a7d5962cbcd7975b1ac232619d42123eef56857b9d81.scope.
Dec 13 14:11:01.941591 env[1449]: time="2024-12-13T14:11:01.941533674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-hmlvv,Uid:8f0fb905-356a-42ec-8fa9-f58b73eafd09,Namespace:default,Attempt:0,} returns sandbox id \"b14c54bd50d1e0bf1bf8a7d5962cbcd7975b1ac232619d42123eef56857b9d81\""
Dec 13 14:11:01.943695 env[1449]: time="2024-12-13T14:11:01.943666024Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:11:02.269679 kubelet[1969]: E1213 14:11:02.269542 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:03.269905 kubelet[1969]: E1213 14:11:03.269855 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:04.270048 kubelet[1969]: E1213 14:11:04.270010 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:04.536530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417667906.mount: Deactivated successfully.
Dec 13 14:11:05.270626 kubelet[1969]: E1213 14:11:05.270577 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:05.733763 env[1449]: time="2024-12-13T14:11:05.733707537Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:05.740119 env[1449]: time="2024-12-13T14:11:05.740075632Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:05.745182 env[1449]: time="2024-12-13T14:11:05.745144491Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:05.749261 env[1449]: time="2024-12-13T14:11:05.749221115Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:05.750085 env[1449]: time="2024-12-13T14:11:05.750046672Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\""
Dec 13 14:11:05.752695 env[1449]: time="2024-12-13T14:11:05.752664901Z" level=info msg="CreateContainer within sandbox \"b14c54bd50d1e0bf1bf8a7d5962cbcd7975b1ac232619d42123eef56857b9d81\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 14:11:05.776510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3966739542.mount: Deactivated successfully.
Dec 13 14:11:05.782569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945489267.mount: Deactivated successfully.
Dec 13 14:11:05.798006 env[1449]: time="2024-12-13T14:11:05.797958919Z" level=info msg="CreateContainer within sandbox \"b14c54bd50d1e0bf1bf8a7d5962cbcd7975b1ac232619d42123eef56857b9d81\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f338828475126505022dc606ec67d0c708a3bbf426480b48ddeb818ed5c241ed\""
Dec 13 14:11:05.798669 env[1449]: time="2024-12-13T14:11:05.798643077Z" level=info msg="StartContainer for \"f338828475126505022dc606ec67d0c708a3bbf426480b48ddeb818ed5c241ed\""
Dec 13 14:11:05.816546 systemd[1]: Started cri-containerd-f338828475126505022dc606ec67d0c708a3bbf426480b48ddeb818ed5c241ed.scope.
Dec 13 14:11:05.845912 env[1449]: time="2024-12-13T14:11:05.845851127Z" level=info msg="StartContainer for \"f338828475126505022dc606ec67d0c708a3bbf426480b48ddeb818ed5c241ed\" returns successfully"
Dec 13 14:11:06.271205 kubelet[1969]: E1213 14:11:06.271162 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:06.420886 kubelet[1969]: I1213 14:11:06.420836 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-hmlvv" podStartSLOduration=6.612891058 podStartE2EDuration="10.420821019s" podCreationTimestamp="2024-12-13 14:10:56 +0000 UTC" firstStartedPulling="2024-12-13 14:11:01.943048267 +0000 UTC m=+25.399385468" lastFinishedPulling="2024-12-13 14:11:05.750978228 +0000 UTC m=+29.207315429" observedRunningTime="2024-12-13 14:11:06.42054982 +0000 UTC m=+29.876887021" watchObservedRunningTime="2024-12-13 14:11:06.420821019 +0000 UTC m=+29.877158220"
Dec 13 14:11:07.271758 kubelet[1969]: E1213 14:11:07.271704 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:08.272570 kubelet[1969]: E1213 14:11:08.272512 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:09.272999 kubelet[1969]: E1213 14:11:09.272957 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:10.273667 kubelet[1969]: E1213 14:11:10.273630 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:11.274414 kubelet[1969]: E1213 14:11:11.274378 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:12.275098 kubelet[1969]: E1213 14:11:12.275064 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:12.671877 kubelet[1969]: I1213 14:11:12.671825 1969 topology_manager.go:215] "Topology Admit Handler" podUID="649ddc3e-e83d-48d6-ae8b-d34849eb4bf6" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 14:11:12.676227 systemd[1]: Created slice kubepods-besteffort-pod649ddc3e_e83d_48d6_ae8b_d34849eb4bf6.slice.
Dec 13 14:11:12.766806 kubelet[1969]: I1213 14:11:12.766769 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/649ddc3e-e83d-48d6-ae8b-d34849eb4bf6-data\") pod \"nfs-server-provisioner-0\" (UID: \"649ddc3e-e83d-48d6-ae8b-d34849eb4bf6\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:11:12.766965 kubelet[1969]: I1213 14:11:12.766826 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94n94\" (UniqueName: \"kubernetes.io/projected/649ddc3e-e83d-48d6-ae8b-d34849eb4bf6-kube-api-access-94n94\") pod \"nfs-server-provisioner-0\" (UID: \"649ddc3e-e83d-48d6-ae8b-d34849eb4bf6\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:11:12.979671 env[1449]: time="2024-12-13T14:11:12.979230687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:649ddc3e-e83d-48d6-ae8b-d34849eb4bf6,Namespace:default,Attempt:0,}"
Dec 13 14:11:13.034066 systemd-networkd[1601]: lxcc57aabbd993b: Link UP
Dec 13 14:11:13.044496 kernel: eth0: renamed from tmp1266d
Dec 13 14:11:13.058206 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:11:13.058340 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc57aabbd993b: link becomes ready
Dec 13 14:11:13.059596 systemd-networkd[1601]: lxcc57aabbd993b: Gained carrier
Dec 13 14:11:13.246751 env[1449]: time="2024-12-13T14:11:13.246281878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:11:13.246946 env[1449]: time="2024-12-13T14:11:13.246915156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:11:13.247060 env[1449]: time="2024-12-13T14:11:13.247036075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:11:13.247369 env[1449]: time="2024-12-13T14:11:13.247319155Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1266d783ba9003cf72ec84364a5ec9a563e58e16f42b1de89799b08b9cd607a5 pid=3138 runtime=io.containerd.runc.v2
Dec 13 14:11:13.265789 systemd[1]: Started cri-containerd-1266d783ba9003cf72ec84364a5ec9a563e58e16f42b1de89799b08b9cd607a5.scope.
Dec 13 14:11:13.275488 kubelet[1969]: E1213 14:11:13.275414 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:13.296734 env[1449]: time="2024-12-13T14:11:13.296679431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:649ddc3e-e83d-48d6-ae8b-d34849eb4bf6,Namespace:default,Attempt:0,} returns sandbox id \"1266d783ba9003cf72ec84364a5ec9a563e58e16f42b1de89799b08b9cd607a5\""
Dec 13 14:11:13.298651 env[1449]: time="2024-12-13T14:11:13.298615304Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 14:11:13.878760 systemd[1]: run-containerd-runc-k8s.io-1266d783ba9003cf72ec84364a5ec9a563e58e16f42b1de89799b08b9cd607a5-runc.9Sr0wQ.mount: Deactivated successfully.
Dec 13 14:11:14.276772 kubelet[1969]: E1213 14:11:14.276458 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:14.325587 systemd-networkd[1601]: lxcc57aabbd993b: Gained IPv6LL
Dec 13 14:11:15.277610 kubelet[1969]: E1213 14:11:15.277568 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:15.681292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3014854680.mount: Deactivated successfully.
Dec 13 14:11:16.278028 kubelet[1969]: E1213 14:11:16.277978 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:17.251539 kubelet[1969]: E1213 14:11:17.251494 1969 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:17.283286 kubelet[1969]: E1213 14:11:17.278626 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:17.840384 env[1449]: time="2024-12-13T14:11:17.840333085Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:17.876723 env[1449]: time="2024-12-13T14:11:17.876683495Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:17.883940 env[1449]: time="2024-12-13T14:11:17.883908873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:17.889372 env[1449]: time="2024-12-13T14:11:17.889331896Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:17.890116 env[1449]: time="2024-12-13T14:11:17.890088454Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Dec 13 14:11:17.893087 env[1449]: time="2024-12-13T14:11:17.892954205Z" level=info msg="CreateContainer within sandbox \"1266d783ba9003cf72ec84364a5ec9a563e58e16f42b1de89799b08b9cd607a5\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 14:11:17.913206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146526309.mount: Deactivated successfully.
Dec 13 14:11:17.918769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1843257581.mount: Deactivated successfully.
Dec 13 14:11:17.937743 env[1449]: time="2024-12-13T14:11:17.937699669Z" level=info msg="CreateContainer within sandbox \"1266d783ba9003cf72ec84364a5ec9a563e58e16f42b1de89799b08b9cd607a5\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"819b55299009403fe7bf1379e4c5e2d9c7b1f6d646c0248e61045372fa3f8761\""
Dec 13 14:11:17.938612 env[1449]: time="2024-12-13T14:11:17.938588827Z" level=info msg="StartContainer for \"819b55299009403fe7bf1379e4c5e2d9c7b1f6d646c0248e61045372fa3f8761\""
Dec 13 14:11:17.955253 systemd[1]: Started cri-containerd-819b55299009403fe7bf1379e4c5e2d9c7b1f6d646c0248e61045372fa3f8761.scope.
Dec 13 14:11:17.984928 env[1449]: time="2024-12-13T14:11:17.984868846Z" level=info msg="StartContainer for \"819b55299009403fe7bf1379e4c5e2d9c7b1f6d646c0248e61045372fa3f8761\" returns successfully"
Dec 13 14:11:18.280303 kubelet[1969]: E1213 14:11:18.279704 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:18.442291 kubelet[1969]: I1213 14:11:18.442224 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.848921421 podStartE2EDuration="6.442208565s" podCreationTimestamp="2024-12-13 14:11:12 +0000 UTC" firstStartedPulling="2024-12-13 14:11:13.298093986 +0000 UTC m=+36.754431187" lastFinishedPulling="2024-12-13 14:11:17.89138113 +0000 UTC m=+41.347718331" observedRunningTime="2024-12-13 14:11:18.442061326 +0000 UTC m=+41.898398567" watchObservedRunningTime="2024-12-13 14:11:18.442208565 +0000 UTC m=+41.898545766"
Dec 13 14:11:19.279830 kubelet[1969]: E1213 14:11:19.279785 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:20.280497 kubelet[1969]: E1213 14:11:20.280434 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:21.281489 kubelet[1969]: E1213 14:11:21.281426 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:22.282078 kubelet[1969]: E1213 14:11:22.282045 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:23.283083 kubelet[1969]: E1213 14:11:23.283029 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:24.283919 kubelet[1969]: E1213 14:11:24.283876 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:25.285478 kubelet[1969]: E1213 14:11:25.285438 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:26.286306 kubelet[1969]: E1213 14:11:26.286274 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:27.287693 kubelet[1969]: E1213 14:11:27.287653 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:27.561154 kubelet[1969]: I1213 14:11:27.561125 1969 topology_manager.go:215] "Topology Admit Handler" podUID="4378d4b9-41b1-4863-ab47-cb06577df96e" podNamespace="default" podName="test-pod-1"
Dec 13 14:11:27.565870 systemd[1]: Created slice kubepods-besteffort-pod4378d4b9_41b1_4863_ab47_cb06577df96e.slice.
Dec 13 14:11:27.640365 kubelet[1969]: I1213 14:11:27.640319 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2fhr\" (UniqueName: \"kubernetes.io/projected/4378d4b9-41b1-4863-ab47-cb06577df96e-kube-api-access-h2fhr\") pod \"test-pod-1\" (UID: \"4378d4b9-41b1-4863-ab47-cb06577df96e\") " pod="default/test-pod-1"
Dec 13 14:11:27.640365 kubelet[1969]: I1213 14:11:27.640367 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-079fdcdf-945b-46ae-b530-a5018774c853\" (UniqueName: \"kubernetes.io/nfs/4378d4b9-41b1-4863-ab47-cb06577df96e-pvc-079fdcdf-945b-46ae-b530-a5018774c853\") pod \"test-pod-1\" (UID: \"4378d4b9-41b1-4863-ab47-cb06577df96e\") " pod="default/test-pod-1"
Dec 13 14:11:27.894466 kernel: FS-Cache: Loaded
Dec 13 14:11:27.955008 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 14:11:27.955266 kernel: RPC: Registered udp transport module.
Dec 13 14:11:27.959049 kernel: RPC: Registered tcp transport module.
Dec 13 14:11:27.964152 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 14:11:28.051474 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 14:11:28.260856 kernel: NFS: Registering the id_resolver key type
Dec 13 14:11:28.260986 kernel: Key type id_resolver registered
Dec 13 14:11:28.264025 kernel: Key type id_legacy registered
Dec 13 14:11:28.289249 kubelet[1969]: E1213 14:11:28.289205 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:28.551045 nfsidmap[3253]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-478c04130c'
Dec 13 14:11:28.559564 nfsidmap[3255]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-478c04130c'
Dec 13 14:11:28.769866 env[1449]: time="2024-12-13T14:11:28.769814043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4378d4b9-41b1-4863-ab47-cb06577df96e,Namespace:default,Attempt:0,}"
Dec 13 14:11:28.836182 systemd-networkd[1601]: lxc495c2dbc8fb1: Link UP
Dec 13 14:11:28.846461 kernel: eth0: renamed from tmp9100a
Dec 13 14:11:28.865926 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:11:28.866034 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc495c2dbc8fb1: link becomes ready
Dec 13 14:11:28.866424 systemd-networkd[1601]: lxc495c2dbc8fb1: Gained carrier
Dec 13 14:11:29.036075 env[1449]: time="2024-12-13T14:11:29.036001200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:11:29.036075 env[1449]: time="2024-12-13T14:11:29.036038480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:11:29.036075 env[1449]: time="2024-12-13T14:11:29.036048080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:11:29.036418 env[1449]: time="2024-12-13T14:11:29.036378120Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9100a33a04b01ae149a2c9e9b322b7a057dac3d83b7140727b15b3f4d9ff6cad pid=3281 runtime=io.containerd.runc.v2
Dec 13 14:11:29.056487 systemd[1]: Started cri-containerd-9100a33a04b01ae149a2c9e9b322b7a057dac3d83b7140727b15b3f4d9ff6cad.scope.
Dec 13 14:11:29.085683 env[1449]: time="2024-12-13T14:11:29.085636883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4378d4b9-41b1-4863-ab47-cb06577df96e,Namespace:default,Attempt:0,} returns sandbox id \"9100a33a04b01ae149a2c9e9b322b7a057dac3d83b7140727b15b3f4d9ff6cad\""
Dec 13 14:11:29.088185 env[1449]: time="2024-12-13T14:11:29.087669278Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:11:29.289686 kubelet[1969]: E1213 14:11:29.289637 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:29.460966 env[1449]: time="2024-12-13T14:11:29.460640473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:29.470018 env[1449]: time="2024-12-13T14:11:29.469990011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:29.473045 env[1449]: time="2024-12-13T14:11:29.473013884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:29.476909 env[1449]: time="2024-12-13T14:11:29.476881194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:11:29.477617 env[1449]: time="2024-12-13T14:11:29.477589873Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\""
Dec 13 14:11:29.480167 env[1449]: time="2024-12-13T14:11:29.480134107Z" level=info msg="CreateContainer within sandbox \"9100a33a04b01ae149a2c9e9b322b7a057dac3d83b7140727b15b3f4d9ff6cad\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 14:11:29.511006 env[1449]: time="2024-12-13T14:11:29.510964354Z" level=info msg="CreateContainer within sandbox \"9100a33a04b01ae149a2c9e9b322b7a057dac3d83b7140727b15b3f4d9ff6cad\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"49a9e121ffb63b0b9ad5fad1d729f575a2b3d1728b8c2607a4bcfeb4544870dd\""
Dec 13 14:11:29.511410 env[1449]: time="2024-12-13T14:11:29.511365993Z" level=info msg="StartContainer for \"49a9e121ffb63b0b9ad5fad1d729f575a2b3d1728b8c2607a4bcfeb4544870dd\""
Dec 13 14:11:29.527890 systemd[1]: Started cri-containerd-49a9e121ffb63b0b9ad5fad1d729f575a2b3d1728b8c2607a4bcfeb4544870dd.scope.
Dec 13 14:11:29.555741 env[1449]: time="2024-12-13T14:11:29.555660327Z" level=info msg="StartContainer for \"49a9e121ffb63b0b9ad5fad1d729f575a2b3d1728b8c2607a4bcfeb4544870dd\" returns successfully"
Dec 13 14:11:29.804098 systemd[1]: run-containerd-runc-k8s.io-9100a33a04b01ae149a2c9e9b322b7a057dac3d83b7140727b15b3f4d9ff6cad-runc.ztruQJ.mount: Deactivated successfully.
Dec 13 14:11:29.941645 systemd-networkd[1601]: lxc495c2dbc8fb1: Gained IPv6LL Dec 13 14:11:30.290693 kubelet[1969]: E1213 14:11:30.290583 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:30.462211 kubelet[1969]: I1213 14:11:30.462159 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.070505686 podStartE2EDuration="17.462144757s" podCreationTimestamp="2024-12-13 14:11:13 +0000 UTC" firstStartedPulling="2024-12-13 14:11:29.087104239 +0000 UTC m=+52.543441440" lastFinishedPulling="2024-12-13 14:11:29.47874335 +0000 UTC m=+52.935080511" observedRunningTime="2024-12-13 14:11:30.461411559 +0000 UTC m=+53.917748720" watchObservedRunningTime="2024-12-13 14:11:30.462144757 +0000 UTC m=+53.918481958" Dec 13 14:11:31.290878 kubelet[1969]: E1213 14:11:31.290821 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:32.291775 kubelet[1969]: E1213 14:11:32.291726 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:33.292267 kubelet[1969]: E1213 14:11:33.292219 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:34.292556 kubelet[1969]: E1213 14:11:34.292517 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:34.603265 systemd[1]: run-containerd-runc-k8s.io-918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701-runc.Bw9CFr.mount: Deactivated successfully. 
Dec 13 14:11:34.616554 env[1449]: time="2024-12-13T14:11:34.616493003Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:11:34.620964 env[1449]: time="2024-12-13T14:11:34.620937713Z" level=info msg="StopContainer for \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\" with timeout 2 (s)" Dec 13 14:11:34.621367 env[1449]: time="2024-12-13T14:11:34.621347352Z" level=info msg="Stop container \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\" with signal terminated" Dec 13 14:11:34.626836 systemd-networkd[1601]: lxc_health: Link DOWN Dec 13 14:11:34.626842 systemd-networkd[1601]: lxc_health: Lost carrier Dec 13 14:11:34.649953 systemd[1]: cri-containerd-918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701.scope: Deactivated successfully. Dec 13 14:11:34.650243 systemd[1]: cri-containerd-918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701.scope: Consumed 6.105s CPU time. Dec 13 14:11:34.665098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701-rootfs.mount: Deactivated successfully. 
Dec 13 14:11:35.293535 kubelet[1969]: E1213 14:11:35.293480 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:35.642306 env[1449]: time="2024-12-13T14:11:35.642252850Z" level=info msg="shim disconnected" id=918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701 Dec 13 14:11:35.642306 env[1449]: time="2024-12-13T14:11:35.642302810Z" level=warning msg="cleaning up after shim disconnected" id=918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701 namespace=k8s.io Dec 13 14:11:35.642306 env[1449]: time="2024-12-13T14:11:35.642311530Z" level=info msg="cleaning up dead shim" Dec 13 14:11:35.648604 env[1449]: time="2024-12-13T14:11:35.648559436Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3414 runtime=io.containerd.runc.v2\n" Dec 13 14:11:35.653335 env[1449]: time="2024-12-13T14:11:35.653296426Z" level=info msg="StopContainer for \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\" returns successfully" Dec 13 14:11:35.653894 env[1449]: time="2024-12-13T14:11:35.653851305Z" level=info msg="StopPodSandbox for \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\"" Dec 13 14:11:35.653982 env[1449]: time="2024-12-13T14:11:35.653906465Z" level=info msg="Container to stop \"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:35.653982 env[1449]: time="2024-12-13T14:11:35.653921425Z" level=info msg="Container to stop \"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:35.653982 env[1449]: time="2024-12-13T14:11:35.653931825Z" level=info msg="Container to stop \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Dec 13 14:11:35.653982 env[1449]: time="2024-12-13T14:11:35.653944065Z" level=info msg="Container to stop \"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:35.653982 env[1449]: time="2024-12-13T14:11:35.653954905Z" level=info msg="Container to stop \"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:35.655595 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a-shm.mount: Deactivated successfully. Dec 13 14:11:35.661833 systemd[1]: cri-containerd-1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a.scope: Deactivated successfully. Dec 13 14:11:35.681253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a-rootfs.mount: Deactivated successfully. 
Dec 13 14:11:35.694651 env[1449]: time="2024-12-13T14:11:35.694605659Z" level=info msg="shim disconnected" id=1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a Dec 13 14:11:35.694860 env[1449]: time="2024-12-13T14:11:35.694844338Z" level=warning msg="cleaning up after shim disconnected" id=1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a namespace=k8s.io Dec 13 14:11:35.694916 env[1449]: time="2024-12-13T14:11:35.694904938Z" level=info msg="cleaning up dead shim" Dec 13 14:11:35.702022 env[1449]: time="2024-12-13T14:11:35.701986203Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3444 runtime=io.containerd.runc.v2\n" Dec 13 14:11:35.702417 env[1449]: time="2024-12-13T14:11:35.702394322Z" level=info msg="TearDown network for sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" successfully" Dec 13 14:11:35.702502 env[1449]: time="2024-12-13T14:11:35.702477362Z" level=info msg="StopPodSandbox for \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" returns successfully" Dec 13 14:11:35.782360 kubelet[1969]: I1213 14:11:35.782331 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87648535-e891-4c04-876d-fe173b27df39-hubble-tls\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.782827 kubelet[1969]: I1213 14:11:35.782810 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-etc-cni-netd\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.782960 kubelet[1969]: I1213 14:11:35.782948 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cilium-run\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783050 kubelet[1969]: I1213 14:11:35.783029 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-bpf-maps\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783218 kubelet[1969]: I1213 14:11:35.783126 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:35.783266 kubelet[1969]: I1213 14:11:35.783146 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:35.783266 kubelet[1969]: I1213 14:11:35.783160 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:35.783266 kubelet[1969]: I1213 14:11:35.783193 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cilium-cgroup\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783349 kubelet[1969]: I1213 14:11:35.783294 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87648535-e891-4c04-876d-fe173b27df39-cilium-config-path\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783349 kubelet[1969]: I1213 14:11:35.783317 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-host-proc-sys-net\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783349 kubelet[1969]: I1213 14:11:35.783337 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-hostproc\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783420 kubelet[1969]: I1213 14:11:35.783351 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-lib-modules\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783420 kubelet[1969]: I1213 14:11:35.783367 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-xtables-lock\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783420 kubelet[1969]: I1213 14:11:35.783386 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m2zw\" (UniqueName: \"kubernetes.io/projected/87648535-e891-4c04-876d-fe173b27df39-kube-api-access-2m2zw\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783420 kubelet[1969]: I1213 14:11:35.783401 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-host-proc-sys-kernel\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783420 kubelet[1969]: I1213 14:11:35.783415 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cni-path\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783557 kubelet[1969]: I1213 14:11:35.783434 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87648535-e891-4c04-876d-fe173b27df39-clustermesh-secrets\") pod \"87648535-e891-4c04-876d-fe173b27df39\" (UID: \"87648535-e891-4c04-876d-fe173b27df39\") " Dec 13 14:11:35.783557 kubelet[1969]: I1213 14:11:35.783481 1969 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-etc-cni-netd\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.783557 kubelet[1969]: I1213 14:11:35.783492 1969 reconciler_common.go:289] "Volume detached for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cilium-run\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.783557 kubelet[1969]: I1213 14:11:35.783501 1969 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-bpf-maps\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.783673 kubelet[1969]: I1213 14:11:35.783656 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:35.783761 kubelet[1969]: I1213 14:11:35.783749 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:35.785623 kubelet[1969]: I1213 14:11:35.785600 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87648535-e891-4c04-876d-fe173b27df39-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:11:35.785778 kubelet[1969]: I1213 14:11:35.785733 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:35.785840 kubelet[1969]: I1213 14:11:35.785746 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-hostproc" (OuterVolumeSpecName: "hostproc") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:35.787488 systemd[1]: var-lib-kubelet-pods-87648535\x2de891\x2d4c04\x2d876d\x2dfe173b27df39-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:11:35.789244 kubelet[1969]: I1213 14:11:35.789215 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:35.789523 kubelet[1969]: I1213 14:11:35.789397 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cni-path" (OuterVolumeSpecName: "cni-path") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:35.789603 kubelet[1969]: I1213 14:11:35.789475 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87648535-e891-4c04-876d-fe173b27df39-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:11:35.789663 kubelet[1969]: I1213 14:11:35.789493 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:35.791696 systemd[1]: var-lib-kubelet-pods-87648535\x2de891\x2d4c04\x2d876d\x2dfe173b27df39-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:11:35.792935 kubelet[1969]: I1213 14:11:35.792908 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87648535-e891-4c04-876d-fe173b27df39-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:11:35.795325 systemd[1]: var-lib-kubelet-pods-87648535\x2de891\x2d4c04\x2d876d\x2dfe173b27df39-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2m2zw.mount: Deactivated successfully. 
Dec 13 14:11:35.796539 kubelet[1969]: I1213 14:11:35.796513 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87648535-e891-4c04-876d-fe173b27df39-kube-api-access-2m2zw" (OuterVolumeSpecName: "kube-api-access-2m2zw") pod "87648535-e891-4c04-876d-fe173b27df39" (UID: "87648535-e891-4c04-876d-fe173b27df39"). InnerVolumeSpecName "kube-api-access-2m2zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:11:35.884224 kubelet[1969]: I1213 14:11:35.884198 1969 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2m2zw\" (UniqueName: \"kubernetes.io/projected/87648535-e891-4c04-876d-fe173b27df39-kube-api-access-2m2zw\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.884361 kubelet[1969]: I1213 14:11:35.884350 1969 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-host-proc-sys-kernel\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.884427 kubelet[1969]: I1213 14:11:35.884417 1969 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cni-path\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.884517 kubelet[1969]: I1213 14:11:35.884507 1969 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87648535-e891-4c04-876d-fe173b27df39-clustermesh-secrets\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.884578 kubelet[1969]: I1213 14:11:35.884568 1969 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-xtables-lock\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.884634 kubelet[1969]: I1213 14:11:35.884624 1969 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-cilium-cgroup\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.884690 kubelet[1969]: I1213 14:11:35.884679 1969 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87648535-e891-4c04-876d-fe173b27df39-hubble-tls\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.884750 kubelet[1969]: I1213 14:11:35.884739 1969 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87648535-e891-4c04-876d-fe173b27df39-cilium-config-path\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.884810 kubelet[1969]: I1213 14:11:35.884800 1969 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-lib-modules\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.884868 kubelet[1969]: I1213 14:11:35.884857 1969 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-host-proc-sys-net\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:35.884925 kubelet[1969]: I1213 14:11:35.884915 1969 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87648535-e891-4c04-876d-fe173b27df39-hostproc\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:36.293947 kubelet[1969]: E1213 14:11:36.293909 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:36.463330 kubelet[1969]: I1213 14:11:36.463305 1969 scope.go:117] "RemoveContainer" containerID="918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701" Dec 13 14:11:36.466754 systemd[1]: Removed slice kubepods-burstable-pod87648535_e891_4c04_876d_fe173b27df39.slice. 
Dec 13 14:11:36.466829 systemd[1]: kubepods-burstable-pod87648535_e891_4c04_876d_fe173b27df39.slice: Consumed 6.197s CPU time. Dec 13 14:11:36.469393 env[1449]: time="2024-12-13T14:11:36.469354351Z" level=info msg="RemoveContainer for \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\"" Dec 13 14:11:36.486497 env[1449]: time="2024-12-13T14:11:36.486436075Z" level=info msg="RemoveContainer for \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\" returns successfully" Dec 13 14:11:36.486765 kubelet[1969]: I1213 14:11:36.486742 1969 scope.go:117] "RemoveContainer" containerID="7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc" Dec 13 14:11:36.488012 env[1449]: time="2024-12-13T14:11:36.487771472Z" level=info msg="RemoveContainer for \"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc\"" Dec 13 14:11:36.495103 env[1449]: time="2024-12-13T14:11:36.495015777Z" level=info msg="RemoveContainer for \"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc\" returns successfully" Dec 13 14:11:36.495229 kubelet[1969]: I1213 14:11:36.495202 1969 scope.go:117] "RemoveContainer" containerID="e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c" Dec 13 14:11:36.496121 env[1449]: time="2024-12-13T14:11:36.496092855Z" level=info msg="RemoveContainer for \"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c\"" Dec 13 14:11:36.502114 env[1449]: time="2024-12-13T14:11:36.502081362Z" level=info msg="RemoveContainer for \"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c\" returns successfully" Dec 13 14:11:36.502282 kubelet[1969]: I1213 14:11:36.502260 1969 scope.go:117] "RemoveContainer" containerID="66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f" Dec 13 14:11:36.503300 env[1449]: time="2024-12-13T14:11:36.503271880Z" level=info msg="RemoveContainer for \"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f\"" Dec 13 14:11:36.509532 env[1449]: 
time="2024-12-13T14:11:36.509502627Z" level=info msg="RemoveContainer for \"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f\" returns successfully" Dec 13 14:11:36.509703 kubelet[1969]: I1213 14:11:36.509682 1969 scope.go:117] "RemoveContainer" containerID="feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961" Dec 13 14:11:36.510851 env[1449]: time="2024-12-13T14:11:36.510655265Z" level=info msg="RemoveContainer for \"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961\"" Dec 13 14:11:36.517846 env[1449]: time="2024-12-13T14:11:36.517819010Z" level=info msg="RemoveContainer for \"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961\" returns successfully" Dec 13 14:11:36.518108 kubelet[1969]: I1213 14:11:36.518080 1969 scope.go:117] "RemoveContainer" containerID="918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701" Dec 13 14:11:36.518464 env[1449]: time="2024-12-13T14:11:36.518387608Z" level=error msg="ContainerStatus for \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\": not found" Dec 13 14:11:36.518613 kubelet[1969]: E1213 14:11:36.518587 1969 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\": not found" containerID="918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701" Dec 13 14:11:36.518704 kubelet[1969]: I1213 14:11:36.518620 1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701"} err="failed to get container status \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"918db857894c48afbd1f197186779ad0725c1de4c122e71d83cacce34fd44701\": not found" Dec 13 14:11:36.518704 kubelet[1969]: I1213 14:11:36.518701 1969 scope.go:117] "RemoveContainer" containerID="7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc" Dec 13 14:11:36.518904 env[1449]: time="2024-12-13T14:11:36.518856967Z" level=error msg="ContainerStatus for \"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc\": not found" Dec 13 14:11:36.519049 kubelet[1969]: E1213 14:11:36.519025 1969 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc\": not found" containerID="7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc" Dec 13 14:11:36.519133 kubelet[1969]: I1213 14:11:36.519115 1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc"} err="failed to get container status \"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a72a3a3fbd99cbd7027239509a9eb01a92b55eee5a36d7df01eaf674a1c86bc\": not found" Dec 13 14:11:36.519210 kubelet[1969]: I1213 14:11:36.519199 1969 scope.go:117] "RemoveContainer" containerID="e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c" Dec 13 14:11:36.519474 env[1449]: time="2024-12-13T14:11:36.519414566Z" level=error msg="ContainerStatus for \"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c\": not found" Dec 13 14:11:36.519584 kubelet[1969]: E1213 14:11:36.519561 1969 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c\": not found" containerID="e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c" Dec 13 14:11:36.519628 kubelet[1969]: I1213 14:11:36.519585 1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c"} err="failed to get container status \"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6e43e0d2d774fecc5d173b6650d6ebe62b8eb9ae79e4e95bc819d2f421c715c\": not found" Dec 13 14:11:36.519628 kubelet[1969]: I1213 14:11:36.519605 1969 scope.go:117] "RemoveContainer" containerID="66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f" Dec 13 14:11:36.519870 env[1449]: time="2024-12-13T14:11:36.519823445Z" level=error msg="ContainerStatus for \"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f\": not found" Dec 13 14:11:36.519981 kubelet[1969]: E1213 14:11:36.519957 1969 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f\": not found" containerID="66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f" Dec 13 14:11:36.520026 kubelet[1969]: I1213 14:11:36.519983 1969 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f"} err="failed to get container status \"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"66b94d5e612024b8ccfb434c7eaa7d500e50edb7e4734db1201e65ce3b24ab1f\": not found" Dec 13 14:11:36.520026 kubelet[1969]: I1213 14:11:36.519998 1969 scope.go:117] "RemoveContainer" containerID="feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961" Dec 13 14:11:36.520264 env[1449]: time="2024-12-13T14:11:36.520188885Z" level=error msg="ContainerStatus for \"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961\": not found" Dec 13 14:11:36.520344 kubelet[1969]: E1213 14:11:36.520319 1969 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961\": not found" containerID="feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961" Dec 13 14:11:36.520383 kubelet[1969]: I1213 14:11:36.520341 1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961"} err="failed to get container status \"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961\": rpc error: code = NotFound desc = an error occurred when try to find container \"feaf28f90a90986102146d7acd4d632f1d47d81711c6ab9e23ceb9a1f561f961\": not found" Dec 13 14:11:37.251707 kubelet[1969]: E1213 14:11:37.251667 1969 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:37.262618 env[1449]: 
time="2024-12-13T14:11:37.262427305Z" level=info msg="StopPodSandbox for \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\"" Dec 13 14:11:37.262618 env[1449]: time="2024-12-13T14:11:37.262532705Z" level=info msg="TearDown network for sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" successfully" Dec 13 14:11:37.262618 env[1449]: time="2024-12-13T14:11:37.262563465Z" level=info msg="StopPodSandbox for \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" returns successfully" Dec 13 14:11:37.263475 env[1449]: time="2024-12-13T14:11:37.263238343Z" level=info msg="RemovePodSandbox for \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\"" Dec 13 14:11:37.263475 env[1449]: time="2024-12-13T14:11:37.263262463Z" level=info msg="Forcibly stopping sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\"" Dec 13 14:11:37.263475 env[1449]: time="2024-12-13T14:11:37.263316663Z" level=info msg="TearDown network for sandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" successfully" Dec 13 14:11:37.273563 env[1449]: time="2024-12-13T14:11:37.273529482Z" level=info msg="RemovePodSandbox \"1ba9250cc48cdb42bd520d873f006d47388fe32855ea2d43150b2d255401647a\" returns successfully" Dec 13 14:11:37.294623 kubelet[1969]: E1213 14:11:37.294594 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:37.339262 kubelet[1969]: E1213 14:11:37.339233 1969 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:11:37.356863 kubelet[1969]: I1213 14:11:37.356834 1969 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87648535-e891-4c04-876d-fe173b27df39" path="/var/lib/kubelet/pods/87648535-e891-4c04-876d-fe173b27df39/volumes" Dec 13 14:11:38.295272 
kubelet[1969]: E1213 14:11:38.295239 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:38.324590 kubelet[1969]: I1213 14:11:38.324537 1969 setters.go:580] "Node became not ready" node="10.200.20.43" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:11:38Z","lastTransitionTime":"2024-12-13T14:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:11:38.741131 kubelet[1969]: I1213 14:11:38.740783 1969 topology_manager.go:215] "Topology Admit Handler" podUID="c53175cb-2990-454d-9086-18d9dd7dc5e1" podNamespace="kube-system" podName="cilium-operator-599987898-2bprz" Dec 13 14:11:38.741131 kubelet[1969]: E1213 14:11:38.740839 1969 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87648535-e891-4c04-876d-fe173b27df39" containerName="apply-sysctl-overwrites" Dec 13 14:11:38.741131 kubelet[1969]: E1213 14:11:38.740860 1969 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87648535-e891-4c04-876d-fe173b27df39" containerName="cilium-agent" Dec 13 14:11:38.741131 kubelet[1969]: E1213 14:11:38.740867 1969 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87648535-e891-4c04-876d-fe173b27df39" containerName="mount-cgroup" Dec 13 14:11:38.741131 kubelet[1969]: E1213 14:11:38.740872 1969 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87648535-e891-4c04-876d-fe173b27df39" containerName="mount-bpf-fs" Dec 13 14:11:38.741131 kubelet[1969]: E1213 14:11:38.740879 1969 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87648535-e891-4c04-876d-fe173b27df39" containerName="clean-cilium-state" Dec 13 14:11:38.741131 kubelet[1969]: I1213 14:11:38.740896 1969 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="87648535-e891-4c04-876d-fe173b27df39" containerName="cilium-agent" Dec 13 14:11:38.745243 systemd[1]: Created slice kubepods-besteffort-podc53175cb_2990_454d_9086_18d9dd7dc5e1.slice. Dec 13 14:11:38.763976 kubelet[1969]: I1213 14:11:38.763945 1969 topology_manager.go:215] "Topology Admit Handler" podUID="a6c87b34-7116-4650-a4b2-e3e247fc57f2" podNamespace="kube-system" podName="cilium-rcxlm" Dec 13 14:11:38.769028 systemd[1]: Created slice kubepods-burstable-poda6c87b34_7116_4650_a4b2_e3e247fc57f2.slice. Dec 13 14:11:38.799682 kubelet[1969]: I1213 14:11:38.799640 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-cgroup\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.799682 kubelet[1969]: I1213 14:11:38.799682 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-host-proc-sys-net\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.799860 kubelet[1969]: I1213 14:11:38.799700 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-config-path\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.799860 kubelet[1969]: I1213 14:11:38.799716 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c53175cb-2990-454d-9086-18d9dd7dc5e1-cilium-config-path\") pod \"cilium-operator-599987898-2bprz\" (UID: 
\"c53175cb-2990-454d-9086-18d9dd7dc5e1\") " pod="kube-system/cilium-operator-599987898-2bprz" Dec 13 14:11:38.799860 kubelet[1969]: I1213 14:11:38.799733 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcttl\" (UniqueName: \"kubernetes.io/projected/c53175cb-2990-454d-9086-18d9dd7dc5e1-kube-api-access-zcttl\") pod \"cilium-operator-599987898-2bprz\" (UID: \"c53175cb-2990-454d-9086-18d9dd7dc5e1\") " pod="kube-system/cilium-operator-599987898-2bprz" Dec 13 14:11:38.799860 kubelet[1969]: I1213 14:11:38.799748 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-run\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.799860 kubelet[1969]: I1213 14:11:38.799763 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-hostproc\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.799982 kubelet[1969]: I1213 14:11:38.799777 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cni-path\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.799982 kubelet[1969]: I1213 14:11:38.799793 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-lib-modules\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 
14:11:38.799982 kubelet[1969]: I1213 14:11:38.799806 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-ipsec-secrets\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.799982 kubelet[1969]: I1213 14:11:38.799820 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-host-proc-sys-kernel\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.799982 kubelet[1969]: I1213 14:11:38.799836 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-bpf-maps\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.799982 kubelet[1969]: I1213 14:11:38.799850 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-etc-cni-netd\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.800112 kubelet[1969]: I1213 14:11:38.799863 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-xtables-lock\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.800112 kubelet[1969]: I1213 14:11:38.799879 1969 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6c87b34-7116-4650-a4b2-e3e247fc57f2-clustermesh-secrets\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.800112 kubelet[1969]: I1213 14:11:38.799893 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6c87b34-7116-4650-a4b2-e3e247fc57f2-hubble-tls\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:38.800112 kubelet[1969]: I1213 14:11:38.799907 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlqtz\" (UniqueName: \"kubernetes.io/projected/a6c87b34-7116-4650-a4b2-e3e247fc57f2-kube-api-access-vlqtz\") pod \"cilium-rcxlm\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " pod="kube-system/cilium-rcxlm" Dec 13 14:11:39.049530 env[1449]: time="2024-12-13T14:11:39.049257639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2bprz,Uid:c53175cb-2990-454d-9086-18d9dd7dc5e1,Namespace:kube-system,Attempt:0,}" Dec 13 14:11:39.075802 env[1449]: time="2024-12-13T14:11:39.075589627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rcxlm,Uid:a6c87b34-7116-4650-a4b2-e3e247fc57f2,Namespace:kube-system,Attempt:0,}" Dec 13 14:11:39.084655 env[1449]: time="2024-12-13T14:11:39.084576649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:11:39.084839 env[1449]: time="2024-12-13T14:11:39.084816729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:11:39.084958 env[1449]: time="2024-12-13T14:11:39.084938168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:11:39.085208 env[1449]: time="2024-12-13T14:11:39.085177048Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20123da6ac1ca21e897df74a54244c5e811e859b52a58c7a3a676f008bab4935 pid=3473 runtime=io.containerd.runc.v2 Dec 13 14:11:39.096310 systemd[1]: Started cri-containerd-20123da6ac1ca21e897df74a54244c5e811e859b52a58c7a3a676f008bab4935.scope. Dec 13 14:11:39.117191 env[1449]: time="2024-12-13T14:11:39.117114905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:11:39.117352 env[1449]: time="2024-12-13T14:11:39.117324344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:11:39.117470 env[1449]: time="2024-12-13T14:11:39.117420664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:11:39.117731 env[1449]: time="2024-12-13T14:11:39.117690903Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de pid=3509 runtime=io.containerd.runc.v2 Dec 13 14:11:39.132488 systemd[1]: Started cri-containerd-afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de.scope. 
Dec 13 14:11:39.144346 env[1449]: time="2024-12-13T14:11:39.144292411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2bprz,Uid:c53175cb-2990-454d-9086-18d9dd7dc5e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"20123da6ac1ca21e897df74a54244c5e811e859b52a58c7a3a676f008bab4935\"" Dec 13 14:11:39.146154 env[1449]: time="2024-12-13T14:11:39.145927407Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:11:39.158980 env[1449]: time="2024-12-13T14:11:39.158935302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rcxlm,Uid:a6c87b34-7116-4650-a4b2-e3e247fc57f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de\"" Dec 13 14:11:39.161787 env[1449]: time="2024-12-13T14:11:39.161746416Z" level=info msg="CreateContainer within sandbox \"afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:11:39.194464 env[1449]: time="2024-12-13T14:11:39.194407311Z" level=info msg="CreateContainer within sandbox \"afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d\"" Dec 13 14:11:39.195053 env[1449]: time="2024-12-13T14:11:39.195030110Z" level=info msg="StartContainer for \"69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d\"" Dec 13 14:11:39.208700 systemd[1]: Started cri-containerd-69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d.scope. Dec 13 14:11:39.218557 systemd[1]: cri-containerd-69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d.scope: Deactivated successfully. 
Dec 13 14:11:39.218822 systemd[1]: Stopped cri-containerd-69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d.scope. Dec 13 14:11:39.288199 env[1449]: time="2024-12-13T14:11:39.288152966Z" level=info msg="shim disconnected" id=69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d Dec 13 14:11:39.288417 env[1449]: time="2024-12-13T14:11:39.288399965Z" level=warning msg="cleaning up after shim disconnected" id=69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d namespace=k8s.io Dec 13 14:11:39.288500 env[1449]: time="2024-12-13T14:11:39.288486725Z" level=info msg="cleaning up dead shim" Dec 13 14:11:39.294884 env[1449]: time="2024-12-13T14:11:39.294843392Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3575 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:11:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:11:39.295294 env[1449]: time="2024-12-13T14:11:39.295206912Z" level=error msg="copy shim log" error="read /proc/self/fd/65: file already closed" Dec 13 14:11:39.295810 env[1449]: time="2024-12-13T14:11:39.295542631Z" level=error msg="Failed to pipe stdout of container \"69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d\"" error="reading from a closed fifo" Dec 13 14:11:39.295937 env[1449]: time="2024-12-13T14:11:39.295577711Z" level=error msg="Failed to pipe stderr of container \"69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d\"" error="reading from a closed fifo" Dec 13 14:11:39.296129 kubelet[1969]: E1213 14:11:39.296102 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:39.299837 env[1449]: 
time="2024-12-13T14:11:39.299747783Z" level=error msg="StartContainer for \"69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:11:39.300426 kubelet[1969]: E1213 14:11:39.300076 1969 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d" Dec 13 14:11:39.300426 kubelet[1969]: E1213 14:11:39.300203 1969 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:11:39.300426 kubelet[1969]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:11:39.300426 kubelet[1969]: rm /hostbin/cilium-mount Dec 13 14:11:39.300639 kubelet[1969]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlqtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-rcxlm_kube-system(a6c87b34-7116-4650-a4b2-e3e247fc57f2): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:11:39.300710 kubelet[1969]: E1213 14:11:39.300230 1969 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rcxlm" podUID="a6c87b34-7116-4650-a4b2-e3e247fc57f2" Dec 13 14:11:39.472392 env[1449]: time="2024-12-13T14:11:39.472352521Z" level=info msg="CreateContainer within sandbox \"afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 14:11:39.513595 env[1449]: time="2024-12-13T14:11:39.513547599Z" level=info msg="CreateContainer within sandbox \"afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128\"" Dec 13 14:11:39.514530 env[1449]: time="2024-12-13T14:11:39.514483117Z" level=info msg="StartContainer for \"9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128\"" Dec 13 14:11:39.527828 systemd[1]: Started cri-containerd-9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128.scope. Dec 13 14:11:39.537510 systemd[1]: cri-containerd-9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128.scope: Deactivated successfully. Dec 13 14:11:39.537762 systemd[1]: Stopped cri-containerd-9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128.scope. 
Dec 13 14:11:39.558896 env[1449]: time="2024-12-13T14:11:39.558774229Z" level=info msg="shim disconnected" id=9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128 Dec 13 14:11:39.558896 env[1449]: time="2024-12-13T14:11:39.558823829Z" level=warning msg="cleaning up after shim disconnected" id=9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128 namespace=k8s.io Dec 13 14:11:39.558896 env[1449]: time="2024-12-13T14:11:39.558832909Z" level=info msg="cleaning up dead shim" Dec 13 14:11:39.566072 env[1449]: time="2024-12-13T14:11:39.566022255Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3612 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:11:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:11:39.566300 env[1449]: time="2024-12-13T14:11:39.566247734Z" level=error msg="copy shim log" error="read /proc/self/fd/69: file already closed" Dec 13 14:11:39.566507 env[1449]: time="2024-12-13T14:11:39.566473654Z" level=error msg="Failed to pipe stderr of container \"9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128\"" error="reading from a closed fifo" Dec 13 14:11:39.566633 env[1449]: time="2024-12-13T14:11:39.566611214Z" level=error msg="Failed to pipe stdout of container \"9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128\"" error="reading from a closed fifo" Dec 13 14:11:39.570916 env[1449]: time="2024-12-13T14:11:39.570869685Z" level=error msg="StartContainer for \"9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:11:39.571469 kubelet[1969]: E1213 14:11:39.571092 1969 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128" Dec 13 14:11:39.571469 kubelet[1969]: E1213 14:11:39.571205 1969 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:11:39.571469 kubelet[1969]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:11:39.571469 kubelet[1969]: rm /hostbin/cilium-mount Dec 13 14:11:39.571638 kubelet[1969]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlqtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-rcxlm_kube-system(a6c87b34-7116-4650-a4b2-e3e247fc57f2): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:11:39.571731 kubelet[1969]: E1213 14:11:39.571232 1969 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rcxlm" podUID="a6c87b34-7116-4650-a4b2-e3e247fc57f2" Dec 13 14:11:40.296430 kubelet[1969]: E1213 14:11:40.296374 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:40.472481 kubelet[1969]: I1213 14:11:40.472359 1969 scope.go:117] "RemoveContainer" containerID="69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d" Dec 13 14:11:40.473151 env[1449]: time="2024-12-13T14:11:40.472855913Z" level=info msg="StopPodSandbox for \"afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de\"" Dec 13 14:11:40.473151 env[1449]: time="2024-12-13T14:11:40.472980873Z" level=info msg="Container to stop \"69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:40.473151 env[1449]: time="2024-12-13T14:11:40.472997193Z" level=info msg="Container to stop \"9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:40.474927 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de-shm.mount: Deactivated successfully. Dec 13 14:11:40.477313 env[1449]: time="2024-12-13T14:11:40.477282345Z" level=info msg="RemoveContainer for \"69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d\"" Dec 13 14:11:40.482012 systemd[1]: cri-containerd-afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de.scope: Deactivated successfully. 
Dec 13 14:11:40.487000 env[1449]: time="2024-12-13T14:11:40.486906566Z" level=info msg="RemoveContainer for \"69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d\" returns successfully" Dec 13 14:11:40.503360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de-rootfs.mount: Deactivated successfully. Dec 13 14:11:40.519117 env[1449]: time="2024-12-13T14:11:40.519071903Z" level=info msg="shim disconnected" id=afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de Dec 13 14:11:40.519303 env[1449]: time="2024-12-13T14:11:40.519287063Z" level=warning msg="cleaning up after shim disconnected" id=afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de namespace=k8s.io Dec 13 14:11:40.519358 env[1449]: time="2024-12-13T14:11:40.519347383Z" level=info msg="cleaning up dead shim" Dec 13 14:11:40.527161 env[1449]: time="2024-12-13T14:11:40.527123767Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3642 runtime=io.containerd.runc.v2\n" Dec 13 14:11:40.527587 env[1449]: time="2024-12-13T14:11:40.527559647Z" level=info msg="TearDown network for sandbox \"afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de\" successfully" Dec 13 14:11:40.527678 env[1449]: time="2024-12-13T14:11:40.527660686Z" level=info msg="StopPodSandbox for \"afd27e6188cfd0a54a64fa0bdeb4583bdfe6261dabaf80d40cb8a01f71f1b3de\" returns successfully" Dec 13 14:11:40.617868 kubelet[1969]: I1213 14:11:40.615432 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cni-path\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.617868 kubelet[1969]: I1213 14:11:40.615481 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-lib-modules\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.617868 kubelet[1969]: I1213 14:11:40.615497 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-xtables-lock\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.617868 kubelet[1969]: I1213 14:11:40.615513 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-host-proc-sys-net\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.617868 kubelet[1969]: I1213 14:11:40.615536 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-config-path\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.617868 kubelet[1969]: I1213 14:11:40.615498 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cni-path" (OuterVolumeSpecName: "cni-path") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.618145 kubelet[1969]: I1213 14:11:40.615529 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.618145 kubelet[1969]: I1213 14:11:40.615541 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.618145 kubelet[1969]: I1213 14:11:40.615552 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.618145 kubelet[1969]: I1213 14:11:40.615555 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6c87b34-7116-4650-a4b2-e3e247fc57f2-clustermesh-secrets\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.618145 kubelet[1969]: I1213 14:11:40.615623 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlqtz\" (UniqueName: \"kubernetes.io/projected/a6c87b34-7116-4650-a4b2-e3e247fc57f2-kube-api-access-vlqtz\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.618261 kubelet[1969]: I1213 14:11:40.615641 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-cgroup\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.618261 kubelet[1969]: I1213 14:11:40.615657 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-hostproc\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.618261 kubelet[1969]: I1213 14:11:40.615681 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-etc-cni-netd\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.618261 kubelet[1969]: I1213 14:11:40.615695 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-host-proc-sys-kernel\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.618261 kubelet[1969]: I1213 14:11:40.615709 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-bpf-maps\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.618261 kubelet[1969]: I1213 14:11:40.615725 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6c87b34-7116-4650-a4b2-e3e247fc57f2-hubble-tls\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.618393 kubelet[1969]: I1213 14:11:40.615741 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-run\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.618393 kubelet[1969]: I1213 14:11:40.615769 1969 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-ipsec-secrets\") pod \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\" (UID: \"a6c87b34-7116-4650-a4b2-e3e247fc57f2\") " Dec 13 14:11:40.618393 kubelet[1969]: I1213 14:11:40.615806 1969 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cni-path\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.618393 kubelet[1969]: I1213 14:11:40.615816 1969 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-lib-modules\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.618393 kubelet[1969]: I1213 14:11:40.615839 1969 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-xtables-lock\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.618393 kubelet[1969]: I1213 14:11:40.615849 1969 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-host-proc-sys-net\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.621132 kubelet[1969]: I1213 14:11:40.619277 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:11:40.621132 kubelet[1969]: I1213 14:11:40.619342 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.621132 kubelet[1969]: I1213 14:11:40.619817 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.621132 kubelet[1969]: I1213 14:11:40.619842 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.620106 systemd[1]: var-lib-kubelet-pods-a6c87b34\x2d7116\x2d4650\x2da4b2\x2de3e247fc57f2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:11:40.623381 kubelet[1969]: I1213 14:11:40.623334 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.623520 kubelet[1969]: I1213 14:11:40.623387 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-hostproc" (OuterVolumeSpecName: "hostproc") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.623688 kubelet[1969]: I1213 14:11:40.623668 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c87b34-7116-4650-a4b2-e3e247fc57f2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:11:40.623781 kubelet[1969]: I1213 14:11:40.623767 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.627870 kubelet[1969]: I1213 14:11:40.626539 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c87b34-7116-4650-a4b2-e3e247fc57f2-kube-api-access-vlqtz" (OuterVolumeSpecName: "kube-api-access-vlqtz") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "kube-api-access-vlqtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:11:40.626973 systemd[1]: var-lib-kubelet-pods-a6c87b34\x2d7116\x2d4650\x2da4b2\x2de3e247fc57f2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvlqtz.mount: Deactivated successfully. Dec 13 14:11:40.627060 systemd[1]: var-lib-kubelet-pods-a6c87b34\x2d7116\x2d4650\x2da4b2\x2de3e247fc57f2-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:11:40.628728 kubelet[1969]: I1213 14:11:40.628704 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:11:40.629059 kubelet[1969]: I1213 14:11:40.629028 1969 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c87b34-7116-4650-a4b2-e3e247fc57f2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a6c87b34-7116-4650-a4b2-e3e247fc57f2" (UID: "a6c87b34-7116-4650-a4b2-e3e247fc57f2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:11:40.716366 kubelet[1969]: I1213 14:11:40.716305 1969 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-config-path\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.716366 kubelet[1969]: I1213 14:11:40.716359 1969 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-cgroup\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.716366 kubelet[1969]: I1213 14:11:40.716370 1969 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-hostproc\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.716366 kubelet[1969]: I1213 14:11:40.716377 1969 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-etc-cni-netd\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.716619 kubelet[1969]: I1213 14:11:40.716386 1969 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6c87b34-7116-4650-a4b2-e3e247fc57f2-clustermesh-secrets\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.716619 kubelet[1969]: I1213 14:11:40.716396 1969 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vlqtz\" 
(UniqueName: \"kubernetes.io/projected/a6c87b34-7116-4650-a4b2-e3e247fc57f2-kube-api-access-vlqtz\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.716619 kubelet[1969]: I1213 14:11:40.716404 1969 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-run\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.716619 kubelet[1969]: I1213 14:11:40.716412 1969 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a6c87b34-7116-4650-a4b2-e3e247fc57f2-cilium-ipsec-secrets\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.716619 kubelet[1969]: I1213 14:11:40.716420 1969 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-host-proc-sys-kernel\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.716619 kubelet[1969]: I1213 14:11:40.716427 1969 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6c87b34-7116-4650-a4b2-e3e247fc57f2-bpf-maps\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.716619 kubelet[1969]: I1213 14:11:40.716453 1969 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6c87b34-7116-4650-a4b2-e3e247fc57f2-hubble-tls\") on node \"10.200.20.43\" DevicePath \"\"" Dec 13 14:11:40.907826 systemd[1]: var-lib-kubelet-pods-a6c87b34\x2d7116\x2d4650\x2da4b2\x2de3e247fc57f2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:11:41.296819 kubelet[1969]: E1213 14:11:41.296778 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:11:41.360526 systemd[1]: Removed slice kubepods-burstable-poda6c87b34_7116_4650_a4b2_e3e247fc57f2.slice. 
Dec 13 14:11:41.461493 env[1449]: time="2024-12-13T14:11:41.460989842Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:11:41.468254 env[1449]: time="2024-12-13T14:11:41.468219268Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:11:41.473393 env[1449]: time="2024-12-13T14:11:41.473364098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:11:41.473721 env[1449]: time="2024-12-13T14:11:41.473695457Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:11:41.477083 kubelet[1969]: I1213 14:11:41.476343 1969 scope.go:117] "RemoveContainer" containerID="9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128" Dec 13 14:11:41.477166 env[1449]: time="2024-12-13T14:11:41.476477652Z" level=info msg="CreateContainer within sandbox \"20123da6ac1ca21e897df74a54244c5e811e859b52a58c7a3a676f008bab4935\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:11:41.480328 env[1449]: time="2024-12-13T14:11:41.480296405Z" level=info msg="RemoveContainer for \"9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128\"" Dec 13 14:11:41.491985 env[1449]: time="2024-12-13T14:11:41.491952502Z" level=info msg="RemoveContainer for 
\"9cd0be4a8c904ed03025165dc76b1aea94abdc255bd7d1c92026193f09fb4128\" returns successfully" Dec 13 14:11:41.508813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount200886292.mount: Deactivated successfully. Dec 13 14:11:41.516533 kubelet[1969]: I1213 14:11:41.516500 1969 topology_manager.go:215] "Topology Admit Handler" podUID="96e4c046-7356-4fd4-979b-37f247599ba0" podNamespace="kube-system" podName="cilium-xhskk" Dec 13 14:11:41.516621 kubelet[1969]: E1213 14:11:41.516564 1969 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6c87b34-7116-4650-a4b2-e3e247fc57f2" containerName="mount-cgroup" Dec 13 14:11:41.516621 kubelet[1969]: I1213 14:11:41.516585 1969 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c87b34-7116-4650-a4b2-e3e247fc57f2" containerName="mount-cgroup" Dec 13 14:11:41.516621 kubelet[1969]: E1213 14:11:41.516604 1969 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6c87b34-7116-4650-a4b2-e3e247fc57f2" containerName="mount-cgroup" Dec 13 14:11:41.516695 kubelet[1969]: I1213 14:11:41.516633 1969 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c87b34-7116-4650-a4b2-e3e247fc57f2" containerName="mount-cgroup" Dec 13 14:11:41.522093 systemd[1]: Created slice kubepods-burstable-pod96e4c046_7356_4fd4_979b_37f247599ba0.slice. Dec 13 14:11:41.532212 env[1449]: time="2024-12-13T14:11:41.532164665Z" level=info msg="CreateContainer within sandbox \"20123da6ac1ca21e897df74a54244c5e811e859b52a58c7a3a676f008bab4935\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"23bec53c691e6a537dbf1b44ccaa3bced60fbab44648d72376c4260a012f9f41\"" Dec 13 14:11:41.532932 env[1449]: time="2024-12-13T14:11:41.532906584Z" level=info msg="StartContainer for \"23bec53c691e6a537dbf1b44ccaa3bced60fbab44648d72376c4260a012f9f41\"" Dec 13 14:11:41.549851 systemd[1]: Started cri-containerd-23bec53c691e6a537dbf1b44ccaa3bced60fbab44648d72376c4260a012f9f41.scope. 
Dec 13 14:11:41.581370 env[1449]: time="2024-12-13T14:11:41.581322651Z" level=info msg="StartContainer for \"23bec53c691e6a537dbf1b44ccaa3bced60fbab44648d72376c4260a012f9f41\" returns successfully" Dec 13 14:11:41.621474 kubelet[1969]: I1213 14:11:41.621116 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96e4c046-7356-4fd4-979b-37f247599ba0-cilium-cgroup\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621474 kubelet[1969]: I1213 14:11:41.621151 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96e4c046-7356-4fd4-979b-37f247599ba0-host-proc-sys-kernel\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621474 kubelet[1969]: I1213 14:11:41.621170 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79c59\" (UniqueName: \"kubernetes.io/projected/96e4c046-7356-4fd4-979b-37f247599ba0-kube-api-access-79c59\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621474 kubelet[1969]: I1213 14:11:41.621187 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/96e4c046-7356-4fd4-979b-37f247599ba0-cilium-ipsec-secrets\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621474 kubelet[1969]: I1213 14:11:41.621203 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96e4c046-7356-4fd4-979b-37f247599ba0-cilium-run\") pod 
\"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621474 kubelet[1969]: I1213 14:11:41.621218 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96e4c046-7356-4fd4-979b-37f247599ba0-bpf-maps\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621737 kubelet[1969]: I1213 14:11:41.621231 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96e4c046-7356-4fd4-979b-37f247599ba0-lib-modules\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621737 kubelet[1969]: I1213 14:11:41.621245 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96e4c046-7356-4fd4-979b-37f247599ba0-clustermesh-secrets\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621737 kubelet[1969]: I1213 14:11:41.621259 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96e4c046-7356-4fd4-979b-37f247599ba0-hostproc\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621737 kubelet[1969]: I1213 14:11:41.621273 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96e4c046-7356-4fd4-979b-37f247599ba0-host-proc-sys-net\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621737 kubelet[1969]: I1213 
14:11:41.621287 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96e4c046-7356-4fd4-979b-37f247599ba0-cni-path\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621737 kubelet[1969]: I1213 14:11:41.621303 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96e4c046-7356-4fd4-979b-37f247599ba0-etc-cni-netd\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621875 kubelet[1969]: I1213 14:11:41.621317 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96e4c046-7356-4fd4-979b-37f247599ba0-xtables-lock\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621875 kubelet[1969]: I1213 14:11:41.621331 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96e4c046-7356-4fd4-979b-37f247599ba0-cilium-config-path\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.621875 kubelet[1969]: I1213 14:11:41.621346 1969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96e4c046-7356-4fd4-979b-37f247599ba0-hubble-tls\") pod \"cilium-xhskk\" (UID: \"96e4c046-7356-4fd4-979b-37f247599ba0\") " pod="kube-system/cilium-xhskk" Dec 13 14:11:41.830222 env[1449]: time="2024-12-13T14:11:41.830087014Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-xhskk,Uid:96e4c046-7356-4fd4-979b-37f247599ba0,Namespace:kube-system,Attempt:0,}" Dec 13 14:11:41.863410 env[1449]: time="2024-12-13T14:11:41.863348870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:11:41.863604 env[1449]: time="2024-12-13T14:11:41.863581870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:11:41.863701 env[1449]: time="2024-12-13T14:11:41.863680710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:11:41.863983 env[1449]: time="2024-12-13T14:11:41.863953709Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6 pid=3707 runtime=io.containerd.runc.v2 Dec 13 14:11:41.873294 systemd[1]: Started cri-containerd-db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6.scope. Dec 13 14:11:41.892472 env[1449]: time="2024-12-13T14:11:41.892412574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xhskk,Uid:96e4c046-7356-4fd4-979b-37f247599ba0,Namespace:kube-system,Attempt:0,} returns sandbox id \"db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6\"" Dec 13 14:11:41.895285 env[1449]: time="2024-12-13T14:11:41.895258689Z" level=info msg="CreateContainer within sandbox \"db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:11:41.933436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627059529.mount: Deactivated successfully. 
Dec 13 14:11:41.944210 env[1449]: time="2024-12-13T14:11:41.944163595Z" level=info msg="CreateContainer within sandbox \"db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"23498f257895b2346cb2b9e70cdd78a1232ca222130d72be8a0e5f72973cf31c\""
Dec 13 14:11:41.944860 env[1449]: time="2024-12-13T14:11:41.944827354Z" level=info msg="StartContainer for \"23498f257895b2346cb2b9e70cdd78a1232ca222130d72be8a0e5f72973cf31c\""
Dec 13 14:11:41.962314 systemd[1]: Started cri-containerd-23498f257895b2346cb2b9e70cdd78a1232ca222130d72be8a0e5f72973cf31c.scope.
Dec 13 14:11:41.988327 env[1449]: time="2024-12-13T14:11:41.988274311Z" level=info msg="StartContainer for \"23498f257895b2346cb2b9e70cdd78a1232ca222130d72be8a0e5f72973cf31c\" returns successfully"
Dec 13 14:11:41.994205 systemd[1]: cri-containerd-23498f257895b2346cb2b9e70cdd78a1232ca222130d72be8a0e5f72973cf31c.scope: Deactivated successfully.
Dec 13 14:11:42.297065 kubelet[1969]: E1213 14:11:42.297027 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:42.340951 kubelet[1969]: E1213 14:11:42.340916 1969 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:11:42.341938 env[1449]: time="2024-12-13T14:11:42.341893563Z" level=info msg="shim disconnected" id=23498f257895b2346cb2b9e70cdd78a1232ca222130d72be8a0e5f72973cf31c
Dec 13 14:11:42.341938 env[1449]: time="2024-12-13T14:11:42.341936363Z" level=warning msg="cleaning up after shim disconnected" id=23498f257895b2346cb2b9e70cdd78a1232ca222130d72be8a0e5f72973cf31c namespace=k8s.io
Dec 13 14:11:42.342068 env[1449]: time="2024-12-13T14:11:42.341945763Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:42.349553 env[1449]: time="2024-12-13T14:11:42.349514589Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3795 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:42.393246 kubelet[1969]: W1213 14:11:42.393182 1969 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6c87b34_7116_4650_a4b2_e3e247fc57f2.slice/cri-containerd-69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d.scope WatchSource:0}: container "69e3c54c7899baa57ca892efb1baf01c7bb847007c10a9352be2fbeee3320a2d" in namespace "k8s.io": not found
Dec 13 14:11:42.484182 env[1449]: time="2024-12-13T14:11:42.484145775Z" level=info msg="CreateContainer within sandbox \"db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:11:42.518792 env[1449]: time="2024-12-13T14:11:42.518708509Z" level=info msg="CreateContainer within sandbox \"db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c9db78c580f6b136f612ed16592d8674d71c131efd9be8c57303794a20439b9b\""
Dec 13 14:11:42.519510 env[1449]: time="2024-12-13T14:11:42.519488468Z" level=info msg="StartContainer for \"c9db78c580f6b136f612ed16592d8674d71c131efd9be8c57303794a20439b9b\""
Dec 13 14:11:42.521179 kubelet[1969]: I1213 14:11:42.521133 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-2bprz" podStartSLOduration=2.192044818 podStartE2EDuration="4.521116625s" podCreationTimestamp="2024-12-13 14:11:38 +0000 UTC" firstStartedPulling="2024-12-13 14:11:39.145657168 +0000 UTC m=+62.601994369" lastFinishedPulling="2024-12-13 14:11:41.474729015 +0000 UTC m=+64.931066176" observedRunningTime="2024-12-13 14:11:42.492156759 +0000 UTC m=+65.948493960" watchObservedRunningTime="2024-12-13 14:11:42.521116625 +0000 UTC m=+65.977453786"
Dec 13 14:11:42.536308 systemd[1]: Started cri-containerd-c9db78c580f6b136f612ed16592d8674d71c131efd9be8c57303794a20439b9b.scope.
Dec 13 14:11:42.563113 env[1449]: time="2024-12-13T14:11:42.562341747Z" level=info msg="StartContainer for \"c9db78c580f6b136f612ed16592d8674d71c131efd9be8c57303794a20439b9b\" returns successfully"
Dec 13 14:11:42.567276 systemd[1]: cri-containerd-c9db78c580f6b136f612ed16592d8674d71c131efd9be8c57303794a20439b9b.scope: Deactivated successfully.
Dec 13 14:11:42.595119 env[1449]: time="2024-12-13T14:11:42.595059845Z" level=info msg="shim disconnected" id=c9db78c580f6b136f612ed16592d8674d71c131efd9be8c57303794a20439b9b
Dec 13 14:11:42.595119 env[1449]: time="2024-12-13T14:11:42.595107485Z" level=warning msg="cleaning up after shim disconnected" id=c9db78c580f6b136f612ed16592d8674d71c131efd9be8c57303794a20439b9b namespace=k8s.io
Dec 13 14:11:42.595119 env[1449]: time="2024-12-13T14:11:42.595118285Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:42.602087 env[1449]: time="2024-12-13T14:11:42.602029672Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3856 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:42.908288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23498f257895b2346cb2b9e70cdd78a1232ca222130d72be8a0e5f72973cf31c-rootfs.mount: Deactivated successfully.
Dec 13 14:11:43.297147 kubelet[1969]: E1213 14:11:43.297106 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:43.357007 kubelet[1969]: I1213 14:11:43.356974 1969 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c87b34-7116-4650-a4b2-e3e247fc57f2" path="/var/lib/kubelet/pods/a6c87b34-7116-4650-a4b2-e3e247fc57f2/volumes"
Dec 13 14:11:43.486841 env[1449]: time="2024-12-13T14:11:43.486801537Z" level=info msg="CreateContainer within sandbox \"db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:11:43.515345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount783573461.mount: Deactivated successfully.
Dec 13 14:11:43.531607 env[1449]: time="2024-12-13T14:11:43.531559094Z" level=info msg="CreateContainer within sandbox \"db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"36d5611c9767f5115ffa0015b09681b58247913a39a1138cce81a95440188efc\""
Dec 13 14:11:43.532229 env[1449]: time="2024-12-13T14:11:43.532198973Z" level=info msg="StartContainer for \"36d5611c9767f5115ffa0015b09681b58247913a39a1138cce81a95440188efc\""
Dec 13 14:11:43.549888 systemd[1]: Started cri-containerd-36d5611c9767f5115ffa0015b09681b58247913a39a1138cce81a95440188efc.scope.
Dec 13 14:11:43.578891 systemd[1]: cri-containerd-36d5611c9767f5115ffa0015b09681b58247913a39a1138cce81a95440188efc.scope: Deactivated successfully.
Dec 13 14:11:43.583735 env[1449]: time="2024-12-13T14:11:43.583413398Z" level=info msg="StartContainer for \"36d5611c9767f5115ffa0015b09681b58247913a39a1138cce81a95440188efc\" returns successfully"
Dec 13 14:11:43.611148 env[1449]: time="2024-12-13T14:11:43.611101306Z" level=info msg="shim disconnected" id=36d5611c9767f5115ffa0015b09681b58247913a39a1138cce81a95440188efc
Dec 13 14:11:43.611148 env[1449]: time="2024-12-13T14:11:43.611148466Z" level=warning msg="cleaning up after shim disconnected" id=36d5611c9767f5115ffa0015b09681b58247913a39a1138cce81a95440188efc namespace=k8s.io
Dec 13 14:11:43.611372 env[1449]: time="2024-12-13T14:11:43.611159346Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:43.617978 env[1449]: time="2024-12-13T14:11:43.617941334Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3912 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:44.297785 kubelet[1969]: E1213 14:11:44.297747 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:44.495285 env[1449]: time="2024-12-13T14:11:44.495235918Z" level=info msg="CreateContainer within sandbox \"db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:11:44.521949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053619771.mount: Deactivated successfully.
Dec 13 14:11:44.527191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910314177.mount: Deactivated successfully.
Dec 13 14:11:44.539697 env[1449]: time="2024-12-13T14:11:44.539659837Z" level=info msg="CreateContainer within sandbox \"db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b985326e23a8bcab1addd11b494bfbbf55fde49dbed86f5bd9771a83c137c3d1\""
Dec 13 14:11:44.540549 env[1449]: time="2024-12-13T14:11:44.540525555Z" level=info msg="StartContainer for \"b985326e23a8bcab1addd11b494bfbbf55fde49dbed86f5bd9771a83c137c3d1\""
Dec 13 14:11:44.556934 systemd[1]: Started cri-containerd-b985326e23a8bcab1addd11b494bfbbf55fde49dbed86f5bd9771a83c137c3d1.scope.
Dec 13 14:11:44.579202 systemd[1]: cri-containerd-b985326e23a8bcab1addd11b494bfbbf55fde49dbed86f5bd9771a83c137c3d1.scope: Deactivated successfully.
Dec 13 14:11:44.584946 env[1449]: time="2024-12-13T14:11:44.584902714Z" level=info msg="StartContainer for \"b985326e23a8bcab1addd11b494bfbbf55fde49dbed86f5bd9771a83c137c3d1\" returns successfully"
Dec 13 14:11:44.612385 env[1449]: time="2024-12-13T14:11:44.612334504Z" level=info msg="shim disconnected" id=b985326e23a8bcab1addd11b494bfbbf55fde49dbed86f5bd9771a83c137c3d1
Dec 13 14:11:44.612385 env[1449]: time="2024-12-13T14:11:44.612381464Z" level=warning msg="cleaning up after shim disconnected" id=b985326e23a8bcab1addd11b494bfbbf55fde49dbed86f5bd9771a83c137c3d1 namespace=k8s.io
Dec 13 14:11:44.612385 env[1449]: time="2024-12-13T14:11:44.612390104Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:44.619042 env[1449]: time="2024-12-13T14:11:44.618998132Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3968 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:45.298770 kubelet[1969]: E1213 14:11:45.298710 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:45.495492 env[1449]: time="2024-12-13T14:11:45.495438063Z" level=info msg="CreateContainer within sandbox \"db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:11:45.503242 kubelet[1969]: W1213 14:11:45.503201 1969 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e4c046_7356_4fd4_979b_37f247599ba0.slice/cri-containerd-23498f257895b2346cb2b9e70cdd78a1232ca222130d72be8a0e5f72973cf31c.scope WatchSource:0}: task 23498f257895b2346cb2b9e70cdd78a1232ca222130d72be8a0e5f72973cf31c not found: not found
Dec 13 14:11:45.523741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654478835.mount: Deactivated successfully.
Dec 13 14:11:45.527682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210435919.mount: Deactivated successfully.
Dec 13 14:11:45.544830 env[1449]: time="2024-12-13T14:11:45.544759654Z" level=info msg="CreateContainer within sandbox \"db9c110b08611e2338686cd39d6b5e660a06eb955e8fd92d08253c9ad6ff92c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e2a907d01c15b37689836ff61db01a38d7753623d952513fdb24f192a3cd29d0\""
Dec 13 14:11:45.545292 env[1449]: time="2024-12-13T14:11:45.545267613Z" level=info msg="StartContainer for \"e2a907d01c15b37689836ff61db01a38d7753623d952513fdb24f192a3cd29d0\""
Dec 13 14:11:45.561361 systemd[1]: Started cri-containerd-e2a907d01c15b37689836ff61db01a38d7753623d952513fdb24f192a3cd29d0.scope.
Dec 13 14:11:45.595046 env[1449]: time="2024-12-13T14:11:45.594984044Z" level=info msg="StartContainer for \"e2a907d01c15b37689836ff61db01a38d7753623d952513fdb24f192a3cd29d0\" returns successfully"
Dec 13 14:11:45.902474 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Dec 13 14:11:46.299577 kubelet[1969]: E1213 14:11:46.299522 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:46.654839 systemd[1]: run-containerd-runc-k8s.io-e2a907d01c15b37689836ff61db01a38d7753623d952513fdb24f192a3cd29d0-runc.8eq0G0.mount: Deactivated successfully.
Dec 13 14:11:47.299668 kubelet[1969]: E1213 14:11:47.299626 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:48.300733 kubelet[1969]: E1213 14:11:48.300701 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:48.526065 systemd-networkd[1601]: lxc_health: Link UP
Dec 13 14:11:48.543774 systemd-networkd[1601]: lxc_health: Gained carrier
Dec 13 14:11:48.544515 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:11:48.611661 kubelet[1969]: W1213 14:11:48.611539 1969 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e4c046_7356_4fd4_979b_37f247599ba0.slice/cri-containerd-c9db78c580f6b136f612ed16592d8674d71c131efd9be8c57303794a20439b9b.scope WatchSource:0}: task c9db78c580f6b136f612ed16592d8674d71c131efd9be8c57303794a20439b9b not found: not found
Dec 13 14:11:48.785928 systemd[1]: run-containerd-runc-k8s.io-e2a907d01c15b37689836ff61db01a38d7753623d952513fdb24f192a3cd29d0-runc.wuEpN4.mount: Deactivated successfully.
Dec 13 14:11:49.301913 kubelet[1969]: E1213 14:11:49.301870 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:49.853312 kubelet[1969]: I1213 14:11:49.852774 1969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xhskk" podStartSLOduration=8.852676945 podStartE2EDuration="8.852676945s" podCreationTimestamp="2024-12-13 14:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:11:46.515020441 +0000 UTC m=+69.971357682" watchObservedRunningTime="2024-12-13 14:11:49.852676945 +0000 UTC m=+73.309014146"
Dec 13 14:11:49.974649 systemd-networkd[1601]: lxc_health: Gained IPv6LL
Dec 13 14:11:50.302963 kubelet[1969]: E1213 14:11:50.302936 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:50.960218 systemd[1]: run-containerd-runc-k8s.io-e2a907d01c15b37689836ff61db01a38d7753623d952513fdb24f192a3cd29d0-runc.FgsosS.mount: Deactivated successfully.
Dec 13 14:11:51.304350 kubelet[1969]: E1213 14:11:51.304303 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:51.719967 kubelet[1969]: W1213 14:11:51.719871 1969 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e4c046_7356_4fd4_979b_37f247599ba0.slice/cri-containerd-36d5611c9767f5115ffa0015b09681b58247913a39a1138cce81a95440188efc.scope WatchSource:0}: task 36d5611c9767f5115ffa0015b09681b58247913a39a1138cce81a95440188efc not found: not found
Dec 13 14:11:52.305090 kubelet[1969]: E1213 14:11:52.305036 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:53.086867 systemd[1]: run-containerd-runc-k8s.io-e2a907d01c15b37689836ff61db01a38d7753623d952513fdb24f192a3cd29d0-runc.woz3sy.mount: Deactivated successfully.
Dec 13 14:11:53.305579 kubelet[1969]: E1213 14:11:53.305512 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:54.306433 kubelet[1969]: E1213 14:11:54.306393 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:54.827990 kubelet[1969]: W1213 14:11:54.827953 1969 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e4c046_7356_4fd4_979b_37f247599ba0.slice/cri-containerd-b985326e23a8bcab1addd11b494bfbbf55fde49dbed86f5bd9771a83c137c3d1.scope WatchSource:0}: task b985326e23a8bcab1addd11b494bfbbf55fde49dbed86f5bd9771a83c137c3d1 not found: not found
Dec 13 14:11:55.198988 systemd[1]: run-containerd-runc-k8s.io-e2a907d01c15b37689836ff61db01a38d7753623d952513fdb24f192a3cd29d0-runc.ErDCBa.mount: Deactivated successfully.
Dec 13 14:11:55.307512 kubelet[1969]: E1213 14:11:55.307478 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:56.307837 kubelet[1969]: E1213 14:11:56.307774 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:57.251669 kubelet[1969]: E1213 14:11:57.251642 1969 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:57.307905 kubelet[1969]: E1213 14:11:57.307883 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:58.308658 kubelet[1969]: E1213 14:11:58.308627 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:11:59.309212 kubelet[1969]: E1213 14:11:59.309177 1969 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"