Feb 9 10:01:11.004543 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 10:01:11.004562 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 10:01:11.004569 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 9 10:01:11.004577 kernel: printk: bootconsole [pl11] enabled
Feb 9 10:01:11.004582 kernel: efi: EFI v2.70 by EDK II
Feb 9 10:01:11.004587 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 9 10:01:11.004593 kernel: random: crng init done
Feb 9 10:01:11.004599 kernel: ACPI: Early table checksum verification disabled
Feb 9 10:01:11.009638 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 9 10:01:11.009657 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 10:01:11.009663 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 10:01:11.009674 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 10:01:11.009679 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 10:01:11.009685 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 10:01:11.009691 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 10:01:11.009697 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 10:01:11.009703 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 10:01:11.009710 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 10:01:11.009716 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 9 10:01:11.009722 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 10:01:11.009727 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 9 10:01:11.009733 kernel: NUMA: Failed to initialise from firmware
Feb 9 10:01:11.009739 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 10:01:11.009751 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Feb 9 10:01:11.009758 kernel: Zone ranges:
Feb 9 10:01:11.009764 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 9 10:01:11.009770 kernel: DMA32 empty
Feb 9 10:01:11.009777 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 10:01:11.009783 kernel: Movable zone start for each node
Feb 9 10:01:11.009788 kernel: Early memory node ranges
Feb 9 10:01:11.009794 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 9 10:01:11.009800 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 9 10:01:11.009805 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 9 10:01:11.009814 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 9 10:01:11.009821 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 9 10:01:11.009827 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 9 10:01:11.009832 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 9 10:01:11.009838 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 9 10:01:11.009843 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 10:01:11.009851 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 10:01:11.009860 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 9 10:01:11.009868 kernel: psci: probing for conduit method from ACPI.
Feb 9 10:01:11.009874 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 10:01:11.009880 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 10:01:11.009888 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 9 10:01:11.009894 kernel: psci: SMC Calling Convention v1.4
Feb 9 10:01:11.009900 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 9 10:01:11.009906 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 9 10:01:11.009912 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 10:01:11.009921 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 10:01:11.009928 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 10:01:11.009934 kernel: Detected PIPT I-cache on CPU0
Feb 9 10:01:11.009940 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 10:01:11.009946 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 10:01:11.009952 kernel: CPU features: detected: Spectre-BHB
Feb 9 10:01:11.009958 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 10:01:11.009970 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 10:01:11.009976 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 10:01:11.009982 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 9 10:01:11.009988 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 9 10:01:11.009994 kernel: Policy zone: Normal
Feb 9 10:01:11.010002 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 10:01:11.010008 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 10:01:11.010015 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 10:01:11.010021 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 10:01:11.010027 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 10:01:11.010036 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 9 10:01:11.010044 kernel: Memory: 3991936K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202224K reserved, 0K cma-reserved)
Feb 9 10:01:11.010050 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 10:01:11.010056 kernel: trace event string verifier disabled
Feb 9 10:01:11.010062 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 10:01:11.010069 kernel: rcu: RCU event tracing is enabled.
Feb 9 10:01:11.010075 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 10:01:11.010083 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 10:01:11.010091 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 10:01:11.010097 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 10:01:11.010103 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 10:01:11.010110 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 10:01:11.010116 kernel: GICv3: 960 SPIs implemented
Feb 9 10:01:11.010122 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 10:01:11.010128 kernel: GICv3: Distributor has no Range Selector support
Feb 9 10:01:11.010137 kernel: Root IRQ handler: gic_handle_irq
Feb 9 10:01:11.010143 kernel: GICv3: 16 PPIs implemented
Feb 9 10:01:11.010149 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 9 10:01:11.010155 kernel: ITS: No ITS available, not enabling LPIs
Feb 9 10:01:11.010162 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:01:11.010168 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 10:01:11.010174 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 10:01:11.010182 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 10:01:11.010191 kernel: Console: colour dummy device 80x25
Feb 9 10:01:11.010197 kernel: printk: console [tty1] enabled
Feb 9 10:01:11.010204 kernel: ACPI: Core revision 20210730
Feb 9 10:01:11.010210 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 10:01:11.010216 kernel: pid_max: default: 32768 minimum: 301
Feb 9 10:01:11.010223 kernel: LSM: Security Framework initializing
Feb 9 10:01:11.010229 kernel: SELinux: Initializing.
Feb 9 10:01:11.010235 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 10:01:11.010245 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 10:01:11.010253 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 9 10:01:11.010259 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 9 10:01:11.010265 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 10:01:11.010271 kernel: Remapping and enabling EFI services.
Feb 9 10:01:11.010277 kernel: smp: Bringing up secondary CPUs ...
Feb 9 10:01:11.010284 kernel: Detected PIPT I-cache on CPU1
Feb 9 10:01:11.010290 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 9 10:01:11.010297 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:01:11.010303 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 10:01:11.010311 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 10:01:11.010320 kernel: SMP: Total of 2 processors activated.
Feb 9 10:01:11.010326 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 10:01:11.010333 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 9 10:01:11.010339 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 10:01:11.010346 kernel: CPU features: detected: CRC32 instructions
Feb 9 10:01:11.010352 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 10:01:11.010359 kernel: CPU features: detected: LSE atomic instructions
Feb 9 10:01:11.010365 kernel: CPU features: detected: Privileged Access Never
Feb 9 10:01:11.010375 kernel: CPU: All CPU(s) started at EL1
Feb 9 10:01:11.010382 kernel: alternatives: patching kernel code
Feb 9 10:01:11.010393 kernel: devtmpfs: initialized
Feb 9 10:01:11.010401 kernel: KASLR enabled
Feb 9 10:01:11.010408 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 10:01:11.010415 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 10:01:11.010421 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 10:01:11.010430 kernel: SMBIOS 3.1.0 present.
Feb 9 10:01:11.010437 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 10:01:11.010444 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 10:01:11.010452 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 10:01:11.010459 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 10:01:11.010466 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 10:01:11.010472 kernel: audit: initializing netlink subsys (disabled)
Feb 9 10:01:11.010482 kernel: audit: type=2000 audit(0.085:1): state=initialized audit_enabled=0 res=1
Feb 9 10:01:11.010489 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 10:01:11.010495 kernel: cpuidle: using governor menu
Feb 9 10:01:11.010503 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 10:01:11.010510 kernel: ASID allocator initialised with 32768 entries
Feb 9 10:01:11.010517 kernel: ACPI: bus type PCI registered
Feb 9 10:01:11.010523 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 10:01:11.010533 kernel: Serial: AMBA PL011 UART driver
Feb 9 10:01:11.010540 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 10:01:11.010547 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 10:01:11.010553 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 10:01:11.010560 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 10:01:11.010568 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 10:01:11.010578 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 10:01:11.010584 kernel: ACPI: Added _OSI(Module Device)
Feb 9 10:01:11.010591 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 10:01:11.010598 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 10:01:11.010614 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 10:01:11.010626 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 10:01:11.010633 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 10:01:11.010640 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 10:01:11.010649 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 10:01:11.010656 kernel: ACPI: Interpreter enabled
Feb 9 10:01:11.010665 kernel: ACPI: Using GIC for interrupt routing
Feb 9 10:01:11.010672 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 10:01:11.010679 kernel: printk: console [ttyAMA0] enabled
Feb 9 10:01:11.010685 kernel: printk: bootconsole [pl11] disabled
Feb 9 10:01:11.010692 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 9 10:01:11.010698 kernel: iommu: Default domain type: Translated
Feb 9 10:01:11.010705 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 10:01:11.010716 kernel: vgaarb: loaded
Feb 9 10:01:11.010724 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 10:01:11.010730 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 10:01:11.010737 kernel: PTP clock support registered
Feb 9 10:01:11.010743 kernel: Registered efivars operations
Feb 9 10:01:11.010750 kernel: No ACPI PMU IRQ for CPU0
Feb 9 10:01:11.010760 kernel: No ACPI PMU IRQ for CPU1
Feb 9 10:01:11.010767 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 10:01:11.010773 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 10:01:11.010782 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 10:01:11.010788 kernel: pnp: PnP ACPI init
Feb 9 10:01:11.010798 kernel: pnp: PnP ACPI: found 0 devices
Feb 9 10:01:11.010805 kernel: NET: Registered PF_INET protocol family
Feb 9 10:01:11.010812 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 10:01:11.010818 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 10:01:11.010825 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 10:01:11.010832 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 10:01:11.010839 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 10:01:11.010850 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 10:01:11.010857 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 10:01:11.010864 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 10:01:11.010871 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 10:01:11.010877 kernel: PCI: CLS 0 bytes, default 64
Feb 9 10:01:11.010884 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 9 10:01:11.010890 kernel: kvm [1]: HYP mode not available
Feb 9 10:01:11.010900 kernel: Initialise system trusted keyrings
Feb 9 10:01:11.010907 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 10:01:11.010915 kernel: Key type asymmetric registered
Feb 9 10:01:11.010922 kernel: Asymmetric key parser 'x509' registered
Feb 9 10:01:11.010928 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 10:01:11.010935 kernel: io scheduler mq-deadline registered
Feb 9 10:01:11.010944 kernel: io scheduler kyber registered
Feb 9 10:01:11.010951 kernel: io scheduler bfq registered
Feb 9 10:01:11.010958 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 10:01:11.010964 kernel: thunder_xcv, ver 1.0
Feb 9 10:01:11.010971 kernel: thunder_bgx, ver 1.0
Feb 9 10:01:11.010979 kernel: nicpf, ver 1.0
Feb 9 10:01:11.010988 kernel: nicvf, ver 1.0
Feb 9 10:01:11.011134 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 10:01:11.011196 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T10:01:10 UTC (1707472870)
Feb 9 10:01:11.011205 kernel: efifb: probing for efifb
Feb 9 10:01:11.011212 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 10:01:11.011219 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 10:01:11.011225 kernel: efifb: scrolling: redraw
Feb 9 10:01:11.011234 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 10:01:11.011241 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 10:01:11.011247 kernel: fb0: EFI VGA frame buffer device
Feb 9 10:01:11.011254 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 9 10:01:11.011260 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 10:01:11.011267 kernel: NET: Registered PF_INET6 protocol family
Feb 9 10:01:11.011274 kernel: Segment Routing with IPv6
Feb 9 10:01:11.011280 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 10:01:11.011287 kernel: NET: Registered PF_PACKET protocol family
Feb 9 10:01:11.011295 kernel: Key type dns_resolver registered
Feb 9 10:01:11.011302 kernel: registered taskstats version 1
Feb 9 10:01:11.011308 kernel: Loading compiled-in X.509 certificates
Feb 9 10:01:11.011315 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 10:01:11.011322 kernel: Key type .fscrypt registered
Feb 9 10:01:11.011328 kernel: Key type fscrypt-provisioning registered
Feb 9 10:01:11.011335 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 10:01:11.011341 kernel: ima: Allocated hash algorithm: sha1
Feb 9 10:01:11.011348 kernel: ima: No architecture policies found
Feb 9 10:01:11.011356 kernel: Freeing unused kernel memory: 34688K
Feb 9 10:01:11.011362 kernel: Run /init as init process
Feb 9 10:01:11.011369 kernel: with arguments:
Feb 9 10:01:11.011375 kernel: /init
Feb 9 10:01:11.011382 kernel: with environment:
Feb 9 10:01:11.011388 kernel: HOME=/
Feb 9 10:01:11.011394 kernel: TERM=linux
Feb 9 10:01:11.011401 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 10:01:11.011409 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 10:01:11.011420 systemd[1]: Detected virtualization microsoft.
Feb 9 10:01:11.011427 systemd[1]: Detected architecture arm64.
Feb 9 10:01:11.011434 systemd[1]: Running in initrd.
Feb 9 10:01:11.011441 systemd[1]: No hostname configured, using default hostname.
Feb 9 10:01:11.011447 systemd[1]: Hostname set to .
Feb 9 10:01:11.011455 systemd[1]: Initializing machine ID from random generator.
Feb 9 10:01:11.011462 systemd[1]: Queued start job for default target initrd.target.
Feb 9 10:01:11.011470 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 10:01:11.011478 systemd[1]: Reached target cryptsetup.target.
Feb 9 10:01:11.011484 systemd[1]: Reached target paths.target.
Feb 9 10:01:11.011491 systemd[1]: Reached target slices.target.
Feb 9 10:01:11.011498 systemd[1]: Reached target swap.target.
Feb 9 10:01:11.011505 systemd[1]: Reached target timers.target.
Feb 9 10:01:11.011513 systemd[1]: Listening on iscsid.socket.
Feb 9 10:01:11.011520 systemd[1]: Listening on iscsiuio.socket.
Feb 9 10:01:11.011528 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 10:01:11.011535 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 10:01:11.011542 systemd[1]: Listening on systemd-journald.socket.
Feb 9 10:01:11.011549 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 10:01:11.011557 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 10:01:11.011564 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 10:01:11.011571 systemd[1]: Reached target sockets.target.
Feb 9 10:01:11.011577 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 10:01:11.011584 systemd[1]: Finished network-cleanup.service.
Feb 9 10:01:11.011593 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 10:01:11.011600 systemd[1]: Starting systemd-journald.service...
Feb 9 10:01:11.011642 systemd[1]: Starting systemd-modules-load.service...
Feb 9 10:01:11.011650 systemd[1]: Starting systemd-resolved.service...
Feb 9 10:01:11.011657 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 10:01:11.011668 systemd-journald[276]: Journal started
Feb 9 10:01:11.011711 systemd-journald[276]: Runtime Journal (/run/log/journal/410cdfb8bba449989f2a382034091640) is 8.0M, max 78.6M, 70.6M free.
Feb 9 10:01:10.994652 systemd-modules-load[277]: Inserted module 'overlay'
Feb 9 10:01:11.028527 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 10:01:11.036426 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 9 10:01:11.048460 kernel: Bridge firewalling registered
Feb 9 10:01:11.048481 systemd[1]: Started systemd-journald.service.
Feb 9 10:01:11.045421 systemd-resolved[278]: Positive Trust Anchors:
Feb 9 10:01:11.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.045429 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 10:01:11.113692 kernel: audit: type=1130 audit(1707472871.053:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.113717 kernel: SCSI subsystem initialized
Feb 9 10:01:11.113726 kernel: audit: type=1130 audit(1707472871.085:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.045457 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 10:01:11.190490 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 10:01:11.190514 kernel: audit: type=1130 audit(1707472871.130:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.190525 kernel: device-mapper: uevent: version 1.0.3
Feb 9 10:01:11.190533 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 10:01:11.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.047507 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 9 10:01:11.215846 kernel: audit: type=1130 audit(1707472871.194:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.070754 systemd[1]: Started systemd-resolved.service.
Feb 9 10:01:11.241223 kernel: audit: type=1130 audit(1707472871.220:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.086138 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 10:01:11.130927 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 10:01:11.211336 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 10:01:11.220727 systemd[1]: Reached target nss-lookup.target.
Feb 9 10:01:11.239331 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 9 10:01:11.249680 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 10:01:11.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.254932 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 10:01:11.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.278469 systemd[1]: Finished systemd-modules-load.service.
Feb 9 10:01:11.353069 kernel: audit: type=1130 audit(1707472871.287:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.353091 kernel: audit: type=1130 audit(1707472871.308:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.353104 kernel: audit: type=1130 audit(1707472871.333:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.287834 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 10:01:11.308498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 10:01:11.353341 systemd[1]: Starting dracut-cmdline.service...
Feb 9 10:01:11.360551 systemd[1]: Starting systemd-sysctl.service...
Feb 9 10:01:11.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.404268 dracut-cmdline[295]: dracut-dracut-053
Feb 9 10:01:11.404268 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 10:01:11.382378 systemd[1]: Finished systemd-sysctl.service.
Feb 9 10:01:11.447985 kernel: audit: type=1130 audit(1707472871.386:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.448010 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 10:01:11.463201 kernel: iscsi: registered transport (tcp)
Feb 9 10:01:11.481488 kernel: iscsi: registered transport (qla4xxx)
Feb 9 10:01:11.481540 kernel: QLogic iSCSI HBA Driver
Feb 9 10:01:11.515956 systemd[1]: Finished dracut-cmdline.service.
Feb 9 10:01:11.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.521196 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 10:01:11.573626 kernel: raid6: neonx8 gen() 13817 MB/s
Feb 9 10:01:11.593629 kernel: raid6: neonx8 xor() 10823 MB/s
Feb 9 10:01:11.613616 kernel: raid6: neonx4 gen() 13562 MB/s
Feb 9 10:01:11.634616 kernel: raid6: neonx4 xor() 11307 MB/s
Feb 9 10:01:11.654618 kernel: raid6: neonx2 gen() 12977 MB/s
Feb 9 10:01:11.674624 kernel: raid6: neonx2 xor() 10234 MB/s
Feb 9 10:01:11.695615 kernel: raid6: neonx1 gen() 10497 MB/s
Feb 9 10:01:11.715615 kernel: raid6: neonx1 xor() 8789 MB/s
Feb 9 10:01:11.735613 kernel: raid6: int64x8 gen() 6298 MB/s
Feb 9 10:01:11.756620 kernel: raid6: int64x8 xor() 3549 MB/s
Feb 9 10:01:11.776614 kernel: raid6: int64x4 gen() 7259 MB/s
Feb 9 10:01:11.797613 kernel: raid6: int64x4 xor() 3854 MB/s
Feb 9 10:01:11.818616 kernel: raid6: int64x2 gen() 6156 MB/s
Feb 9 10:01:11.838613 kernel: raid6: int64x2 xor() 3325 MB/s
Feb 9 10:01:11.858614 kernel: raid6: int64x1 gen() 5047 MB/s
Feb 9 10:01:11.884001 kernel: raid6: int64x1 xor() 2647 MB/s
Feb 9 10:01:11.884010 kernel: raid6: using algorithm neonx8 gen() 13817 MB/s
Feb 9 10:01:11.884019 kernel: raid6: .... xor() 10823 MB/s, rmw enabled
Feb 9 10:01:11.888564 kernel: raid6: using neon recovery algorithm
Feb 9 10:01:11.905617 kernel: xor: measuring software checksum speed
Feb 9 10:01:11.913986 kernel: 8regs : 17297 MB/sec
Feb 9 10:01:11.913996 kernel: 32regs : 20765 MB/sec
Feb 9 10:01:11.918056 kernel: arm64_neon : 27939 MB/sec
Feb 9 10:01:11.918074 kernel: xor: using function: arm64_neon (27939 MB/sec)
Feb 9 10:01:11.977620 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 10:01:11.986658 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 10:01:11.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:11.994000 audit: BPF prog-id=7 op=LOAD
Feb 9 10:01:11.994000 audit: BPF prog-id=8 op=LOAD
Feb 9 10:01:11.995552 systemd[1]: Starting systemd-udevd.service...
Feb 9 10:01:12.009827 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Feb 9 10:01:12.015808 systemd[1]: Started systemd-udevd.service.
Feb 9 10:01:12.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:12.026089 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 10:01:12.042500 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Feb 9 10:01:12.070797 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 10:01:12.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:12.076351 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 10:01:12.115917 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 10:01:12.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:12.179948 kernel: hv_vmbus: Vmbus version:5.3
Feb 9 10:01:12.188639 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 10:01:12.212630 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 10:01:12.212679 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 10:01:12.216657 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 10:01:12.226902 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 10:01:12.242742 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 10:01:12.242788 kernel: scsi host0: storvsc_host_t
Feb 9 10:01:12.242947 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 10:01:12.243025 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 10:01:12.252782 kernel: scsi host1: storvsc_host_t
Feb 9 10:01:12.252858 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 10:01:12.285232 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 10:01:12.285455 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 10:01:12.292625 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 10:01:12.292795 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 10:01:12.297080 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 10:01:12.301463 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 10:01:12.308422 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 10:01:12.308558 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 10:01:12.319315 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 10:01:12.319353 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 10:01:12.339632 kernel: hv_netvsc 0022487c-62a5-0022-487c-62a50022487c eth0: VF slot 1 added
Feb 9 10:01:12.348644 kernel: hv_vmbus: registering driver hv_pci
Feb 9 10:01:12.359921 kernel: hv_pci d7400242-6428-4d47-b0d9-fb7346279481: PCI VMBus probing: Using version 0x10004
Feb 9 10:01:12.373468 kernel: hv_pci d7400242-6428-4d47-b0d9-fb7346279481: PCI host bridge to bus 6428:00
Feb 9 10:01:12.373659 kernel: pci_bus 6428:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 9 10:01:12.380165 kernel: pci_bus 6428:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 10:01:12.387681 kernel: pci 6428:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 9 10:01:12.401131 kernel: pci 6428:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 10:01:12.421812 kernel: pci 6428:00:02.0: enabling Extended Tags
Feb 9 10:01:12.440620 kernel: pci 6428:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6428:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 9 10:01:12.453553 kernel: pci_bus 6428:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 10:01:12.453726 kernel: pci 6428:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 10:01:12.492623 kernel: mlx5_core 6428:00:02.0: firmware version: 16.30.1284
Feb 9 10:01:12.647753 kernel: mlx5_core 6428:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Feb 9 10:01:12.706847 kernel: hv_netvsc 0022487c-62a5-0022-487c-62a50022487c eth0: VF registering: eth1
Feb 9 10:01:12.707039 kernel: mlx5_core 6428:00:02.0 eth1: joined to eth0
Feb 9 10:01:12.717630 kernel: mlx5_core 6428:00:02.0 enP25640s1: renamed from eth1
Feb 9 10:01:12.745993 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 10:01:12.826630 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (528)
Feb 9 10:01:12.839114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 10:01:12.986818 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 10:01:13.008745 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 10:01:13.014985 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 10:01:13.029434 systemd[1]: Starting disk-uuid.service...
Feb 9 10:01:14.063348 disk-uuid[601]: The operation has completed successfully.
Feb 9 10:01:14.068585 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 10:01:14.120527 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 10:01:14.121826 systemd[1]: Finished disk-uuid.service.
Feb 9 10:01:14.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:14.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:14.134431 systemd[1]: Starting verity-setup.service...
Feb 9 10:01:14.177706 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 10:01:14.439820 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 10:01:14.446064 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 10:01:14.455376 systemd[1]: Finished verity-setup.service.
Feb 9 10:01:14.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:14.516641 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 10:01:14.516936 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 10:01:14.524323 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 10:01:14.525169 systemd[1]: Starting ignition-setup.service...
Feb 9 10:01:14.536775 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 10:01:14.562543 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 10:01:14.562568 kernel: BTRFS info (device sda6): using free space tree
Feb 9 10:01:14.567334 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 10:01:14.611191 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 10:01:14.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:14.619000 audit: BPF prog-id=9 op=LOAD
Feb 9 10:01:14.620741 systemd[1]: Starting systemd-networkd.service...
Feb 9 10:01:14.642451 systemd-networkd[871]: lo: Link UP
Feb 9 10:01:14.642465 systemd-networkd[871]: lo: Gained carrier
Feb 9 10:01:14.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:14.642884 systemd-networkd[871]: Enumeration completed
Feb 9 10:01:14.646079 systemd[1]: Started systemd-networkd.service.
Feb 9 10:01:14.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:14.650809 systemd[1]: Reached target network.target.
Feb 9 10:01:14.656052 systemd[1]: Starting iscsiuio.service...
Feb 9 10:01:14.666345 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 10:01:14.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:14.706434 iscsid[880]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 10:01:14.706434 iscsid[880]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 9 10:01:14.706434 iscsid[880]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 9 10:01:14.706434 iscsid[880]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 10:01:14.706434 iscsid[880]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 10:01:14.706434 iscsid[880]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 10:01:14.706434 iscsid[880]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 10:01:14.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:14.666726 systemd[1]: Started iscsiuio.service.
Feb 9 10:01:14.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:14.669880 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 10:01:14.675508 systemd[1]: Starting iscsid.service...
Feb 9 10:01:14.692379 systemd[1]: Started iscsid.service.
Feb 9 10:01:14.701910 systemd[1]: Starting dracut-initqueue.service...
Feb 9 10:01:14.712408 systemd[1]: Finished dracut-initqueue.service.
Feb 9 10:01:14.719329 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 10:01:14.742381 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 10:01:14.761136 systemd[1]: Reached target remote-fs.target.
Feb 9 10:01:14.770798 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 10:01:14.798961 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 10:01:14.876205 systemd[1]: Finished ignition-setup.service.
Feb 9 10:01:14.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:14.882548 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 10:01:14.900625 kernel: mlx5_core 6428:00:02.0 enP25640s1: Link up
Feb 9 10:01:14.943643 kernel: hv_netvsc 0022487c-62a5-0022-487c-62a50022487c eth0: Data path switched to VF: enP25640s1
Feb 9 10:01:14.949978 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 10:01:14.949642 systemd-networkd[871]: enP25640s1: Link UP
Feb 9 10:01:14.949725 systemd-networkd[871]: eth0: Link UP
Feb 9 10:01:14.949843 systemd-networkd[871]: eth0: Gained carrier
Feb 9 10:01:14.961830 systemd-networkd[871]: enP25640s1: Gained carrier
Feb 9 10:01:14.974682 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 9 10:01:16.673745 systemd-networkd[871]: eth0: Gained IPv6LL
Feb 9 10:01:18.422114 ignition[896]: Ignition 2.14.0
Feb 9 10:01:18.422126 ignition[896]: Stage: fetch-offline
Feb 9 10:01:18.422181 ignition[896]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 10:01:18.422204 ignition[896]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 10:01:18.531266 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 10:01:18.531402 ignition[896]: parsed url from cmdline: ""
Feb 9 10:01:18.531406 ignition[896]: no config URL provided
Feb 9 10:01:18.531411 ignition[896]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 10:01:18.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:18.538858 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 10:01:18.581069 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 10:01:18.581103 kernel: audit: type=1130 audit(1707472878.546:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:18.531419 ignition[896]: no config at "/usr/lib/ignition/user.ign"
Feb 9 10:01:18.557021 systemd[1]: Starting ignition-fetch.service...
Feb 9 10:01:18.531424 ignition[896]: failed to fetch config: resource requires networking
Feb 9 10:01:18.531672 ignition[896]: Ignition finished successfully
Feb 9 10:01:18.563859 ignition[902]: Ignition 2.14.0
Feb 9 10:01:18.563865 ignition[902]: Stage: fetch
Feb 9 10:01:18.563959 ignition[902]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 10:01:18.563976 ignition[902]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 10:01:18.566447 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 10:01:18.566551 ignition[902]: parsed url from cmdline: ""
Feb 9 10:01:18.566554 ignition[902]: no config URL provided
Feb 9 10:01:18.566559 ignition[902]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 10:01:18.566566 ignition[902]: no config at "/usr/lib/ignition/user.ign"
Feb 9 10:01:18.566601 ignition[902]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 9 10:01:18.666844 ignition[902]: GET result: OK
Feb 9 10:01:18.666943 ignition[902]: config has been read from IMDS userdata
Feb 9 10:01:18.666976 ignition[902]: parsing config with SHA512: da977588aae296b77236205e457c684f353b57a8ce282ad439ea9425f62e7feb90c4251638f8891138c1b20d810a25146f994e80e7a150bd53ca57052d2b8402
Feb 9 10:01:18.680748 unknown[902]: fetched base config from "system"
Feb 9 10:01:18.681205 ignition[902]: fetch: fetch complete
Feb 9 10:01:18.680757 unknown[902]: fetched base config from "system"
Feb 9 10:01:18.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:18.681211 ignition[902]: fetch: fetch passed
Feb 9 10:01:18.717515 kernel: audit: type=1130 audit(1707472878.693:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:18.680763 unknown[902]: fetched user config from "azure"
Feb 9 10:01:18.681249 ignition[902]: Ignition finished successfully
Feb 9 10:01:18.686711 systemd[1]: Finished ignition-fetch.service.
Feb 9 10:01:18.725177 ignition[908]: Ignition 2.14.0
Feb 9 10:01:18.694287 systemd[1]: Starting ignition-kargs.service...
Feb 9 10:01:18.725183 ignition[908]: Stage: kargs
Feb 9 10:01:18.725287 ignition[908]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 10:01:18.725305 ignition[908]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 10:01:18.727982 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 10:01:18.747285 ignition[908]: kargs: kargs passed
Feb 9 10:01:18.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:18.749885 systemd[1]: Finished ignition-kargs.service.
Feb 9 10:01:18.747350 ignition[908]: Ignition finished successfully
Feb 9 10:01:18.790982 kernel: audit: type=1130 audit(1707472878.759:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:18.779169 systemd[1]: Starting ignition-disks.service...
Feb 9 10:01:18.817721 kernel: audit: type=1130 audit(1707472878.799:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:18.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:18.786018 ignition[914]: Ignition 2.14.0
Feb 9 10:01:18.795026 systemd[1]: Finished ignition-disks.service.
Feb 9 10:01:18.786024 ignition[914]: Stage: disks
Feb 9 10:01:18.799575 systemd[1]: Reached target initrd-root-device.target.
Feb 9 10:01:18.786127 ignition[914]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 10:01:18.822434 systemd[1]: Reached target local-fs-pre.target.
Feb 9 10:01:18.786144 ignition[914]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 10:01:18.830321 systemd[1]: Reached target local-fs.target.
Feb 9 10:01:18.789708 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 10:01:18.838810 systemd[1]: Reached target sysinit.target.
Feb 9 10:01:18.791369 ignition[914]: disks: disks passed
Feb 9 10:01:18.845927 systemd[1]: Reached target basic.target.
Feb 9 10:01:18.791434 ignition[914]: Ignition finished successfully
Feb 9 10:01:18.855616 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 10:01:18.951935 systemd-fsck[922]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks
Feb 9 10:01:18.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:18.959311 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 10:01:18.988948 kernel: audit: type=1130 audit(1707472878.963:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:18.965090 systemd[1]: Mounting sysroot.mount...
Feb 9 10:01:19.008631 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 10:01:19.008784 systemd[1]: Mounted sysroot.mount.
Feb 9 10:01:19.012591 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 10:01:19.061286 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 10:01:19.066022 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 10:01:19.078525 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 10:01:19.078573 systemd[1]: Reached target ignition-diskful.target.
Feb 9 10:01:19.094050 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 10:01:19.135007 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 10:01:19.140139 systemd[1]: Starting initrd-setup-root.service...
Feb 9 10:01:19.161655 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (933)
Feb 9 10:01:19.172956 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 10:01:19.172980 kernel: BTRFS info (device sda6): using free space tree
Feb 9 10:01:19.173102 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 10:01:19.184278 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 10:01:19.189642 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 10:01:19.210667 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory
Feb 9 10:01:19.219055 initrd-setup-root[972]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 10:01:19.227590 initrd-setup-root[980]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 10:01:19.673577 systemd[1]: Finished initrd-setup-root.service.
Feb 9 10:01:19.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:19.679232 systemd[1]: Starting ignition-mount.service...
Feb 9 10:01:19.711486 kernel: audit: type=1130 audit(1707472879.678:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:19.702588 systemd[1]: Starting sysroot-boot.service...
Feb 9 10:01:19.707712 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 10:01:19.707833 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 10:01:19.742913 systemd[1]: Finished sysroot-boot.service.
Feb 9 10:01:19.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:19.770286 kernel: audit: type=1130 audit(1707472879.747:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:19.774000 ignition[1001]: INFO : Ignition 2.14.0
Feb 9 10:01:19.774000 ignition[1001]: INFO : Stage: mount
Feb 9 10:01:19.781861 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 10:01:19.781861 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 10:01:19.781861 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 10:01:19.830029 kernel: audit: type=1130 audit(1707472879.794:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:19.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:19.830099 ignition[1001]: INFO : mount: mount passed
Feb 9 10:01:19.830099 ignition[1001]: INFO : Ignition finished successfully
Feb 9 10:01:19.790388 systemd[1]: Finished ignition-mount.service.
Feb 9 10:01:20.665420 coreos-metadata[932]: Feb 09 10:01:20.665 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 10:01:20.675316 coreos-metadata[932]: Feb 09 10:01:20.675 INFO Fetch successful
Feb 9 10:01:20.708122 coreos-metadata[932]: Feb 09 10:01:20.708 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 9 10:01:20.721269 coreos-metadata[932]: Feb 09 10:01:20.721 INFO Fetch successful
Feb 9 10:01:20.737499 coreos-metadata[932]: Feb 09 10:01:20.737 INFO wrote hostname ci-3510.3.2-a-c671677e8d to /sysroot/etc/hostname
Feb 9 10:01:20.746910 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 10:01:20.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:20.752830 systemd[1]: Starting ignition-files.service...
Feb 9 10:01:20.779632 kernel: audit: type=1130 audit(1707472880.751:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:20.778487 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 10:01:20.806147 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1011)
Feb 9 10:01:20.806191 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 10:01:20.806201 kernel: BTRFS info (device sda6): using free space tree
Feb 9 10:01:20.810909 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 10:01:20.820684 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 10:01:20.833921 ignition[1030]: INFO : Ignition 2.14.0
Feb 9 10:01:20.833921 ignition[1030]: INFO : Stage: files
Feb 9 10:01:20.842852 ignition[1030]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 10:01:20.842852 ignition[1030]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 10:01:20.842852 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 10:01:20.842852 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 10:01:20.875006 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 10:01:20.875006 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 10:01:20.972543 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 10:01:20.980254 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 10:01:20.995626 unknown[1030]: wrote ssh authorized keys file for user: core
Feb 9 10:01:21.000964 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 10:01:21.011900 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 10:01:21.023145 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 9 10:01:21.517435 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 10:01:21.802776 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 9 10:01:21.819396 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 10:01:21.819396 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 10:01:21.819396 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 9 10:01:22.209284 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 10:01:22.348796 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 9 10:01:22.365105 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 10:01:22.365105 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 10:01:22.365105 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1
Feb 9 10:01:22.837623 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 10:01:23.116969 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970
Feb 9 10:01:23.133145 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 10:01:23.133145 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 10:01:23.133145 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1
Feb 9 10:01:23.184492 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 10:01:23.792612 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348
Feb 9 10:01:23.809936 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 10:01:23.809936 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 10:01:23.809936 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 10:01:23.809936 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 10:01:23.809936 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 10:01:23.809936 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 10:01:23.809936 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 10:01:23.809936 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 10:01:23.809936 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 10:01:23.920072 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1035)
Feb 9 10:01:23.868866 systemd[1]: mnt-oem789339473.mount: Deactivated successfully.
Feb 9 10:01:23.925352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem789339473"
Feb 9 10:01:23.925352 ignition[1030]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem789339473": device or resource busy
Feb 9 10:01:23.925352 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem789339473", trying btrfs: device or resource busy
Feb 9 10:01:23.925352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem789339473"
Feb 9 10:01:23.925352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem789339473"
Feb 9 10:01:23.925352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem789339473"
Feb 9 10:01:23.925352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem789339473"
Feb 9 10:01:23.925352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 10:01:23.925352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 10:01:23.925352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 10:01:23.925352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3412085130"
Feb 9 10:01:23.925352 ignition[1030]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3412085130": device or resource busy
Feb 9 10:01:23.925352 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3412085130", trying btrfs: device or resource busy
Feb 9 10:01:23.925352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3412085130"
Feb 9 10:01:24.181320 kernel: audit: type=1130 audit(1707472883.929:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:24.181347 kernel: audit: type=1130 audit(1707472884.009:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:24.181363 kernel: audit: type=1131 audit(1707472884.009:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:24.181373 kernel: audit: type=1130 audit(1707472884.075:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:23.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:24.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:24.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:24.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:23.909230 systemd[1]: mnt-oem3412085130.mount: Deactivated successfully.
Feb 9 10:01:24.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:24.204979 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3412085130"
Feb 9 10:01:24.204979 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem3412085130"
Feb 9 10:01:24.204979 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem3412085130"
Feb 9 10:01:24.204979 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(12): [started] processing unit "waagent.service"
Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(12): [finished] processing unit "waagent.service"
Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(13): [started] processing unit "nvidia.service"
Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(13): [finished]
processing unit "nvidia.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(16): [started] processing unit "prepare-critools.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(16): [finished] processing unit "prepare-critools.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(19): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 10:01:24.204979 ignition[1030]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 10:01:24.525515 kernel: audit: type=1130 audit(1707472884.185:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:24.525544 kernel: audit: type=1131 audit(1707472884.206:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.525556 kernel: audit: type=1130 audit(1707472884.278:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.525565 kernel: audit: type=1131 audit(1707472884.365:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:23.920850 systemd[1]: Finished ignition-files.service. 
Feb 9 10:01:24.533986 ignition[1030]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-critools.service" Feb 9 10:01:24.533986 ignition[1030]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 10:01:24.533986 ignition[1030]: INFO : files: op(1b): [started] setting preset to enabled for "waagent.service" Feb 9 10:01:24.533986 ignition[1030]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service" Feb 9 10:01:24.533986 ignition[1030]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 10:01:24.533986 ignition[1030]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 10:01:24.533986 ignition[1030]: INFO : files: files passed Feb 9 10:01:24.533986 ignition[1030]: INFO : Ignition finished successfully Feb 9 10:01:24.644595 kernel: audit: type=1131 audit(1707472884.569:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.644633 kernel: audit: type=1131 audit(1707472884.625:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:23.957899 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 9 10:01:24.652242 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 10:01:24.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:23.963015 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 10:01:24.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:23.963848 systemd[1]: Starting ignition-quench.service... Feb 9 10:01:24.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:23.994584 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 10:01:23.994714 systemd[1]: Finished ignition-quench.service. Feb 9 10:01:24.060842 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 10:01:24.709796 iscsid[880]: iscsid shutting down. Feb 9 10:01:24.104148 systemd[1]: Reached target ignition-complete.target. Feb 9 10:01:24.129244 systemd[1]: Starting initrd-parse-etc.service... Feb 9 10:01:24.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:24.728996 ignition[1068]: INFO : Ignition 2.14.0 Feb 9 10:01:24.728996 ignition[1068]: INFO : Stage: umount Feb 9 10:01:24.728996 ignition[1068]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 10:01:24.728996 ignition[1068]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 10:01:24.728996 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 10:01:24.728996 ignition[1068]: INFO : umount: umount passed Feb 9 10:01:24.728996 ignition[1068]: INFO : Ignition finished successfully Feb 9 10:01:24.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.171626 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Feb 9 10:01:24.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.171746 systemd[1]: Finished initrd-parse-etc.service. Feb 9 10:01:24.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.207575 systemd[1]: Reached target initrd-fs.target. Feb 9 10:01:24.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.254951 systemd[1]: Reached target initrd.target. Feb 9 10:01:24.259360 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 10:01:24.260208 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 10:01:24.273726 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 10:01:24.296416 systemd[1]: Starting initrd-cleanup.service... Feb 9 10:01:24.321047 systemd[1]: Stopped target nss-lookup.target. Feb 9 10:01:24.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.332517 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 10:01:24.339774 systemd[1]: Stopped target timers.target. Feb 9 10:01:24.348793 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 10:01:24.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.348947 systemd[1]: Stopped dracut-pre-pivot.service. 
Feb 9 10:01:24.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.390346 systemd[1]: Stopped target initrd.target. Feb 9 10:01:24.417782 systemd[1]: Stopped target basic.target. Feb 9 10:01:24.429051 systemd[1]: Stopped target ignition-complete.target. Feb 9 10:01:24.441108 systemd[1]: Stopped target ignition-diskful.target. Feb 9 10:01:24.456698 systemd[1]: Stopped target initrd-root-device.target. Feb 9 10:01:24.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.479406 systemd[1]: Stopped target remote-fs.target. Feb 9 10:01:24.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.956000 audit: BPF prog-id=6 op=UNLOAD Feb 9 10:01:24.493550 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 10:01:24.505679 systemd[1]: Stopped target sysinit.target. Feb 9 10:01:24.517799 systemd[1]: Stopped target local-fs.target. Feb 9 10:01:24.530302 systemd[1]: Stopped target local-fs-pre.target. Feb 9 10:01:24.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.538401 systemd[1]: Stopped target swap.target. 
Feb 9 10:01:24.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.550185 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 10:01:25.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.550335 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 10:01:24.594051 systemd[1]: Stopped target cryptsetup.target. Feb 9 10:01:24.617157 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 10:01:24.617303 systemd[1]: Stopped dracut-initqueue.service. Feb 9 10:01:25.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.647158 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 10:01:24.647329 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 10:01:24.657621 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 10:01:25.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:25.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.657749 systemd[1]: Stopped ignition-files.service. Feb 9 10:01:25.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:24.670264 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 10:01:25.107009 kernel: hv_netvsc 0022487c-62a5-0022-487c-62a50022487c eth0: Data path switched from VF: enP25640s1 Feb 9 10:01:25.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:25.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:25.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.670396 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 10:01:24.689107 systemd[1]: Stopping ignition-mount.service... Feb 9 10:01:24.707482 systemd[1]: Stopping iscsid.service... Feb 9 10:01:24.716240 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 10:01:24.716389 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 10:01:24.725794 systemd[1]: Stopping sysroot-boot.service... Feb 9 10:01:24.740743 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 10:01:24.742898 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 10:01:24.749222 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 10:01:24.749314 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 10:01:24.761501 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 10:01:24.761602 systemd[1]: Stopped iscsid.service. Feb 9 10:01:24.776668 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 10:01:24.776740 systemd[1]: Stopped ignition-mount.service. 
Feb 9 10:01:24.788720 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 10:01:24.788819 systemd[1]: Stopped ignition-disks.service. Feb 9 10:01:24.797150 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 10:01:24.797190 systemd[1]: Stopped ignition-kargs.service. Feb 9 10:01:24.806869 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 10:01:25.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.806907 systemd[1]: Stopped ignition-fetch.service. Feb 9 10:01:24.816159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 10:01:24.816198 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 10:01:24.824791 systemd[1]: Stopped target paths.target. Feb 9 10:01:24.832575 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 10:01:24.842711 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 10:01:24.847640 systemd[1]: Stopped target slices.target. Feb 9 10:01:24.851656 systemd[1]: Stopped target sockets.target. Feb 9 10:01:24.860167 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 10:01:24.860213 systemd[1]: Closed iscsid.socket. Feb 9 10:01:24.867510 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 10:01:24.867553 systemd[1]: Stopped ignition-setup.service. Feb 9 10:01:24.876214 systemd[1]: Stopping iscsiuio.service... Feb 9 10:01:24.888387 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 10:01:24.888485 systemd[1]: Stopped iscsiuio.service. Feb 9 10:01:24.895902 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 10:01:24.895978 systemd[1]: Finished initrd-cleanup.service. 
Feb 9 10:01:25.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.906314 systemd[1]: Stopped target network.target. Feb 9 10:01:24.913487 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 10:01:25.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:24.913524 systemd[1]: Closed iscsiuio.socket. Feb 9 10:01:24.921382 systemd[1]: Stopping systemd-networkd.service... Feb 9 10:01:24.929317 systemd[1]: Stopping systemd-resolved.service... Feb 9 10:01:24.937300 systemd-networkd[871]: eth0: DHCPv6 lease lost Feb 9 10:01:25.315000 audit: BPF prog-id=9 op=UNLOAD Feb 9 10:01:24.938668 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 10:01:24.938768 systemd[1]: Stopped systemd-networkd.service. Feb 9 10:01:24.948562 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 10:01:24.948738 systemd[1]: Stopped systemd-resolved.service. Feb 9 10:01:24.957602 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 10:01:24.957665 systemd[1]: Closed systemd-networkd.socket. Feb 9 10:01:24.966037 systemd[1]: Stopping network-cleanup.service... Feb 9 10:01:25.361589 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Feb 9 10:01:24.977469 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 10:01:24.977541 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 10:01:24.986830 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 10:01:24.986900 systemd[1]: Stopped systemd-sysctl.service. Feb 9 10:01:24.999767 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 10:01:24.999822 systemd[1]: Stopped systemd-modules-load.service. 
Feb 9 10:01:25.004987 systemd[1]: Stopping systemd-udevd.service... Feb 9 10:01:25.015409 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 10:01:25.022401 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 10:01:25.022563 systemd[1]: Stopped systemd-udevd.service. Feb 9 10:01:25.031890 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 10:01:25.031933 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 10:01:25.040173 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 10:01:25.040202 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 10:01:25.050758 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 10:01:25.050802 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 10:01:25.059942 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 10:01:25.059982 systemd[1]: Stopped dracut-cmdline.service. Feb 9 10:01:25.064466 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 10:01:25.064505 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 10:01:25.077908 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 10:01:25.090272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 10:01:25.090338 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 10:01:25.101681 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 10:01:25.101800 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 10:01:25.187048 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 10:01:25.187202 systemd[1]: Stopped network-cleanup.service. Feb 9 10:01:25.246552 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 10:01:25.267594 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 10:01:25.267703 systemd[1]: Stopped sysroot-boot.service. Feb 9 10:01:25.277204 systemd[1]: Reached target initrd-switch-root.target. 
Feb 9 10:01:25.286643 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 10:01:25.286693 systemd[1]: Stopped initrd-setup-root.service. Feb 9 10:01:25.295727 systemd[1]: Starting initrd-switch-root.service... Feb 9 10:01:25.311469 systemd[1]: Switching root. Feb 9 10:01:25.362492 systemd-journald[276]: Journal stopped Feb 9 10:01:37.884619 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 10:01:37.884639 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 10:01:37.884650 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 10:01:37.884660 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 10:01:37.884668 kernel: SELinux: policy capability open_perms=1 Feb 9 10:01:37.884676 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 10:01:37.884685 kernel: SELinux: policy capability always_check_network=0 Feb 9 10:01:37.884693 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 10:01:37.884701 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 10:01:37.884709 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 10:01:37.884719 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 10:01:37.884729 systemd[1]: Successfully loaded SELinux policy in 374.360ms. Feb 9 10:01:37.884739 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.562ms. Feb 9 10:01:37.884749 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 10:01:37.884760 systemd[1]: Detected virtualization microsoft. Feb 9 10:01:37.884769 systemd[1]: Detected architecture arm64. Feb 9 10:01:37.884778 systemd[1]: Detected first boot. 
Feb 9 10:01:37.884788 systemd[1]: Hostname set to . Feb 9 10:01:37.884796 systemd[1]: Initializing machine ID from random generator. Feb 9 10:01:37.884806 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 10:01:37.884814 kernel: kauditd_printk_skb: 40 callbacks suppressed Feb 9 10:01:37.884824 kernel: audit: type=1400 audit(1707472889.963:88): avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 10:01:37.884836 kernel: audit: type=1300 audit(1707472889.963:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458a2 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:37.884845 kernel: audit: type=1327 audit(1707472889.963:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 10:01:37.884855 kernel: audit: type=1400 audit(1707472889.976:89): avc: denied { associate } for pid=1101 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 10:01:37.884864 kernel: audit: type=1300 audit(1707472889.976:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022105 a2=1ed a3=0 items=2 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" 
exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:37.884873 kernel: audit: type=1307 audit(1707472889.976:89): cwd="/" Feb 9 10:01:37.884883 kernel: audit: type=1302 audit(1707472889.976:89): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 10:01:37.884893 kernel: audit: type=1302 audit(1707472889.976:89): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 10:01:37.884902 kernel: audit: type=1327 audit(1707472889.976:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 10:01:37.884911 systemd[1]: Populated /etc with preset unit settings. Feb 9 10:01:37.884920 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:01:37.884930 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:01:37.884941 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 10:01:37.884951 kernel: audit: type=1334 audit(1707472897.203:90): prog-id=12 op=LOAD
Feb 9 10:01:37.884959 kernel: audit: type=1334 audit(1707472897.203:91): prog-id=3 op=UNLOAD
Feb 9 10:01:37.884967 kernel: audit: type=1334 audit(1707472897.210:92): prog-id=13 op=LOAD
Feb 9 10:01:37.884976 kernel: audit: type=1334 audit(1707472897.215:93): prog-id=14 op=LOAD
Feb 9 10:01:37.884984 kernel: audit: type=1334 audit(1707472897.215:94): prog-id=4 op=UNLOAD
Feb 9 10:01:37.884993 kernel: audit: type=1334 audit(1707472897.215:95): prog-id=5 op=UNLOAD
Feb 9 10:01:37.885003 kernel: audit: type=1334 audit(1707472897.221:96): prog-id=15 op=LOAD
Feb 9 10:01:37.885013 kernel: audit: type=1334 audit(1707472897.221:97): prog-id=12 op=UNLOAD
Feb 9 10:01:37.885023 kernel: audit: type=1334 audit(1707472897.227:98): prog-id=16 op=LOAD
Feb 9 10:01:37.885032 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 10:01:37.885041 kernel: audit: type=1334 audit(1707472897.233:99): prog-id=17 op=LOAD
Feb 9 10:01:37.885050 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 10:01:37.885060 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 10:01:37.885069 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 10:01:37.885079 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 10:01:37.885089 systemd[1]: Created slice system-getty.slice.
Feb 9 10:01:37.885098 systemd[1]: Created slice system-modprobe.slice.
Feb 9 10:01:37.885108 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 10:01:37.885117 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 10:01:37.885129 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 10:01:37.885138 systemd[1]: Created slice user.slice.
Feb 9 10:01:37.885147 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 10:01:37.885156 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 10:01:37.885165 systemd[1]: Set up automount boot.automount.
Feb 9 10:01:37.885176 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 10:01:37.885185 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 10:01:37.885194 systemd[1]: Stopped target initrd-fs.target.
Feb 9 10:01:37.885203 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 10:01:37.885212 systemd[1]: Reached target integritysetup.target.
Feb 9 10:01:37.885221 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 10:01:37.885231 systemd[1]: Reached target remote-fs.target.
Feb 9 10:01:37.885240 systemd[1]: Reached target slices.target.
Feb 9 10:01:37.885250 systemd[1]: Reached target swap.target.
Feb 9 10:01:37.885259 systemd[1]: Reached target torcx.target.
Feb 9 10:01:37.885268 systemd[1]: Reached target veritysetup.target.
Feb 9 10:01:37.885278 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 10:01:37.885287 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 10:01:37.885296 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 10:01:37.885307 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 10:01:37.885317 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 10:01:37.885327 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 10:01:37.885336 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 10:01:37.885346 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 10:01:37.885355 systemd[1]: Mounting media.mount...
Feb 9 10:01:37.885364 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 10:01:37.885374 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 10:01:37.885384 systemd[1]: Mounting tmp.mount...
Feb 9 10:01:37.885394 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 10:01:37.885403 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 10:01:37.885413 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 10:01:37.885422 systemd[1]: Starting modprobe@configfs.service...
Feb 9 10:01:37.885431 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 10:01:37.885440 systemd[1]: Starting modprobe@drm.service...
Feb 9 10:01:37.885450 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 10:01:37.885459 systemd[1]: Starting modprobe@fuse.service...
Feb 9 10:01:37.885469 systemd[1]: Starting modprobe@loop.service...
Feb 9 10:01:37.885479 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 10:01:37.885489 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 10:01:37.885498 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 10:01:37.885508 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 10:01:37.885517 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 10:01:37.885527 systemd[1]: Stopped systemd-journald.service.
Feb 9 10:01:37.885536 systemd[1]: systemd-journald.service: Consumed 3.038s CPU time.
Feb 9 10:01:37.885547 systemd[1]: Starting systemd-journald.service...
Feb 9 10:01:37.885556 systemd[1]: Starting systemd-modules-load.service...
Feb 9 10:01:37.885565 kernel: loop: module loaded
Feb 9 10:01:37.885574 systemd[1]: Starting systemd-network-generator.service...
Feb 9 10:01:37.885583 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 10:01:37.885593 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 10:01:37.885602 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 10:01:37.885620 systemd[1]: Stopped verity-setup.service.
Feb 9 10:01:37.885630 kernel: fuse: init (API version 7.34)
Feb 9 10:01:37.885640 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 10:01:37.885650 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 10:01:37.885659 systemd[1]: Mounted media.mount.
Feb 9 10:01:37.885668 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 10:01:37.885681 systemd-journald[1207]: Journal started
Feb 9 10:01:37.885723 systemd-journald[1207]: Runtime Journal (/run/log/journal/9fe3af3aea2946eca972689872514a88) is 8.0M, max 78.6M, 70.6M free.
Feb 9 10:01:27.881000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 10:01:28.629000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 10:01:28.629000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 10:01:28.629000 audit: BPF prog-id=10 op=LOAD
Feb 9 10:01:28.629000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 10:01:28.630000 audit: BPF prog-id=11 op=LOAD
Feb 9 10:01:28.630000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 10:01:29.963000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 10:01:29.963000 audit[1101]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458a2 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:01:29.963000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 10:01:29.976000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 10:01:29.976000 audit[1101]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022105 a2=1ed a3=0 items=2 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:01:29.976000 audit: CWD cwd="/"
Feb 9 10:01:29.976000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:29.976000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:29.976000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 10:01:37.203000 audit: BPF prog-id=12 op=LOAD
Feb 9 10:01:37.203000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 10:01:37.210000 audit: BPF prog-id=13 op=LOAD
Feb 9 10:01:37.215000 audit: BPF prog-id=14 op=LOAD
Feb 9 10:01:37.215000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 10:01:37.215000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 10:01:37.221000 audit: BPF prog-id=15 op=LOAD
Feb 9 10:01:37.221000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 10:01:37.227000 audit: BPF prog-id=16 op=LOAD
Feb 9 10:01:37.233000 audit: BPF prog-id=17 op=LOAD
Feb 9 10:01:37.233000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 10:01:37.233000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 10:01:37.239000 audit: BPF prog-id=18 op=LOAD
Feb 9 10:01:37.239000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 10:01:37.245000 audit: BPF prog-id=19 op=LOAD
Feb 9 10:01:37.252000 audit: BPF prog-id=20 op=LOAD
Feb 9 10:01:37.252000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 10:01:37.252000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 10:01:37.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.275000 audit: BPF prog-id=18 op=UNLOAD
Feb 9 10:01:37.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.772000 audit: BPF prog-id=21 op=LOAD
Feb 9 10:01:37.772000 audit: BPF prog-id=22 op=LOAD
Feb 9 10:01:37.772000 audit: BPF prog-id=23 op=LOAD
Feb 9 10:01:37.772000 audit: BPF prog-id=19 op=UNLOAD
Feb 9 10:01:37.772000 audit: BPF prog-id=20 op=UNLOAD
Feb 9 10:01:37.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.881000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 10:01:37.881000 audit[1207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffef563370 a2=4000 a3=1 items=0 ppid=1 pid=1207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:01:37.881000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 10:01:37.202597 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 10:01:29.912690 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 10:01:37.253145 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 10:01:29.946960 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 10:01:37.253518 systemd[1]: systemd-journald.service: Consumed 3.038s CPU time.
Feb 9 10:01:29.946982 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 10:01:29.947021 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 10:01:29.947032 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 10:01:29.947068 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 10:01:29.947081 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 10:01:29.947284 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 10:01:29.947317 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 10:01:29.947328 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 10:01:29.947785 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 10:01:29.947821 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 10:01:29.947839 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 10:01:29.947862 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 10:01:29.947880 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 10:01:29.947895 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 10:01:35.990361 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:01:35.990648 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:01:35.990745 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:01:35.990902 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:01:35.990951 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 10:01:35.991004 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T10:01:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 10:01:37.901689 systemd[1]: Started systemd-journald.service.
Feb 9 10:01:37.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.901757 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 10:01:37.906492 systemd[1]: Mounted tmp.mount.
Feb 9 10:01:37.911473 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 10:01:37.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.918121 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 10:01:37.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.924932 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 10:01:37.925076 systemd[1]: Finished modprobe@configfs.service.
Feb 9 10:01:37.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.930501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 10:01:37.930734 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 10:01:37.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.935771 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 10:01:37.935905 systemd[1]: Finished modprobe@drm.service.
Feb 9 10:01:37.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.940551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 10:01:37.940730 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 10:01:37.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.946026 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 10:01:37.946164 systemd[1]: Finished modprobe@fuse.service.
Feb 9 10:01:37.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.951121 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 10:01:37.951271 systemd[1]: Finished modprobe@loop.service.
Feb 9 10:01:37.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.956473 systemd[1]: Finished systemd-network-generator.service.
Feb 9 10:01:37.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.962590 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 10:01:37.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.968183 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 10:01:37.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:37.974060 systemd[1]: Reached target network-pre.target.
Feb 9 10:01:37.980215 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 10:01:37.985844 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 10:01:37.989964 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 10:01:37.991352 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 10:01:37.996785 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 10:01:38.001363 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 10:01:38.002424 systemd[1]: Starting systemd-random-seed.service...
Feb 9 10:01:38.007024 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 10:01:38.008231 systemd[1]: Starting systemd-sysusers.service...
Feb 9 10:01:38.013603 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 10:01:38.019772 systemd[1]: Finished systemd-modules-load.service.
Feb 9 10:01:38.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:38.025117 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 10:01:38.030332 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 10:01:38.035807 systemd-journald[1207]: Time spent on flushing to /var/log/journal/9fe3af3aea2946eca972689872514a88 is 15.285ms for 1111 entries.
Feb 9 10:01:38.035807 systemd-journald[1207]: System Journal (/var/log/journal/9fe3af3aea2946eca972689872514a88) is 8.0M, max 2.6G, 2.6G free.
Feb 9 10:01:38.110877 systemd-journald[1207]: Received client request to flush runtime journal.
Feb 9 10:01:38.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:38.038335 systemd[1]: Starting systemd-sysctl.service...
Feb 9 10:01:38.114842 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 10:01:38.065763 systemd[1]: Finished systemd-random-seed.service.
Feb 9 10:01:38.071231 systemd[1]: Reached target first-boot-complete.target.
Feb 9 10:01:38.111836 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 10:01:38.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:38.133123 systemd[1]: Finished systemd-sysctl.service.
Feb 9 10:01:38.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:38.556986 systemd[1]: Finished systemd-sysusers.service.
Feb 9 10:01:38.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:39.089179 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 10:01:39.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:39.094000 audit: BPF prog-id=24 op=LOAD
Feb 9 10:01:39.094000 audit: BPF prog-id=25 op=LOAD
Feb 9 10:01:39.094000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 10:01:39.094000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 10:01:39.095545 systemd[1]: Starting systemd-udevd.service...
Feb 9 10:01:39.113936 systemd-udevd[1224]: Using default interface naming scheme 'v252'.
Feb 9 10:01:39.345980 systemd[1]: Started systemd-udevd.service.
Feb 9 10:01:39.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:39.356000 audit: BPF prog-id=26 op=LOAD
Feb 9 10:01:39.358126 systemd[1]: Starting systemd-networkd.service...
Feb 9 10:01:39.381911 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Feb 9 10:01:39.439497 systemd[1]: Starting systemd-userdbd.service...
Feb 9 10:01:39.438000 audit: BPF prog-id=27 op=LOAD
Feb 9 10:01:39.438000 audit: BPF prog-id=28 op=LOAD
Feb 9 10:01:39.438000 audit: BPF prog-id=29 op=LOAD
Feb 9 10:01:39.452000 audit[1231]: AVC avc: denied { confidentiality } for pid=1231 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 10:01:39.464651 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 10:01:39.464712 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 10:01:39.464727 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 10:01:39.469336 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 10:01:39.481718 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 10:01:39.481776 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 9 10:01:39.486611 kernel: hv_balloon: Memory hot add disabled on ARM64
Feb 9 10:01:39.493975 kernel: Console: switching to colour dummy device 80x25
Feb 9 10:01:39.497630 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 10:01:39.452000 audit[1231]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaad80eb9b0 a1=aa2c a2=ffff9e0b24b0 a3=aaaad8046010 items=12 ppid=1224 pid=1231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:01:39.452000 audit: CWD cwd="/"
Feb 9 10:01:39.452000 audit: PATH item=0 name=(null) inode=5938 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PATH item=1 name=(null) inode=11240 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PATH item=2 name=(null) inode=11240 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PATH item=3 name=(null) inode=11241 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PATH item=4 name=(null) inode=11240 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PATH item=5 name=(null) inode=11242 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PATH item=6 name=(null) inode=11240 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PATH item=7 name=(null) inode=11243 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PATH item=8 name=(null) inode=11240 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PATH item=9 name=(null) inode=11244 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PATH item=10 name=(null) inode=11240 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PATH item=11 name=(null) inode=11245 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:01:39.452000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 10:01:39.516193 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 10:01:39.516312 kernel: hv_vmbus: registering driver hv_utils
Feb 9 10:01:39.527632 kernel: hv_utils: Heartbeat IC version 3.0
Feb 9 10:01:39.527698 kernel: hv_utils: Shutdown IC version 3.2
Feb 9 10:01:39.535626 kernel: hv_utils: TimeSync IC version 4.0
Feb 9 10:01:39.556814 systemd[1]: Started systemd-userdbd.service.
Feb 9 10:01:39.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:39.793518 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1232)
Feb 9 10:01:39.813218 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 10:01:39.819585 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 10:01:39.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:39.825668 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 10:01:39.920984 systemd-networkd[1245]: lo: Link UP
Feb 9 10:01:39.920995 systemd-networkd[1245]: lo: Gained carrier
Feb 9 10:01:39.921375 systemd-networkd[1245]: Enumeration completed
Feb 9 10:01:39.921724 systemd[1]: Started systemd-networkd.service.
Feb 9 10:01:39.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:39.927591 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 10:01:39.952526 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 10:01:40.001504 kernel: mlx5_core 6428:00:02.0 enP25640s1: Link up
Feb 9 10:01:40.027510 kernel: hv_netvsc 0022487c-62a5-0022-487c-62a50022487c eth0: Data path switched to VF: enP25640s1
Feb 9 10:01:40.027714 systemd-networkd[1245]: enP25640s1: Link UP
Feb 9 10:01:40.027799 systemd-networkd[1245]: eth0: Link UP
Feb 9 10:01:40.027802 systemd-networkd[1245]: eth0: Gained carrier
Feb 9 10:01:40.036838 systemd-networkd[1245]: enP25640s1: Gained carrier
Feb 9 10:01:40.045581 systemd-networkd[1245]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 9 10:01:40.122424 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 10:01:40.163444 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 10:01:40.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:40.168948 systemd[1]: Reached target cryptsetup.target.
Feb 9 10:01:40.175133 systemd[1]: Starting lvm2-activation.service...
Feb 9 10:01:40.179296 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 10:01:40.204414 systemd[1]: Finished lvm2-activation.service.
Feb 9 10:01:40.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:01:40.209102 systemd[1]: Reached target local-fs-pre.target.
Feb 9 10:01:40.213636 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 10:01:40.213662 systemd[1]: Reached target local-fs.target. Feb 9 10:01:40.217940 systemd[1]: Reached target machines.target. Feb 9 10:01:40.223648 systemd[1]: Starting ldconfig.service... Feb 9 10:01:40.227425 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 10:01:40.227503 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 10:01:40.228685 systemd[1]: Starting systemd-boot-update.service... Feb 9 10:01:40.233950 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 10:01:40.241061 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 10:01:40.245828 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 10:01:40.245887 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 10:01:40.246995 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 10:01:40.282007 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1305 (bootctl) Feb 9 10:01:40.283355 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 10:01:40.496850 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 10:01:40.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:40.668119 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 10:01:40.744276 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 10:01:40.745446 systemd[1]: Finished systemd-machine-id-commit.service. 
Feb 9 10:01:40.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:40.756178 systemd-fsck[1313]: fsck.fat 4.2 (2021-01-31) Feb 9 10:01:40.756178 systemd-fsck[1313]: /dev/sda1: 236 files, 113719/258078 clusters Feb 9 10:01:40.757902 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 10:01:40.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:40.765425 systemd[1]: Mounting boot.mount... Feb 9 10:01:40.778744 systemd[1]: Mounted boot.mount. Feb 9 10:01:40.791204 systemd[1]: Finished systemd-boot-update.service. Feb 9 10:01:40.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:41.119875 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 10:01:41.186641 systemd-networkd[1245]: eth0: Gained IPv6LL Feb 9 10:01:41.192277 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 10:01:41.192539 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 10:01:41.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.139608 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Feb 9 10:01:42.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.145950 systemd[1]: Starting audit-rules.service... Feb 9 10:01:42.151527 systemd[1]: Starting clean-ca-certificates.service... Feb 9 10:01:42.157189 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 10:01:42.161000 audit: BPF prog-id=30 op=LOAD Feb 9 10:01:42.164458 systemd[1]: Starting systemd-resolved.service... Feb 9 10:01:42.168000 audit: BPF prog-id=31 op=LOAD Feb 9 10:01:42.171181 systemd[1]: Starting systemd-timesyncd.service... Feb 9 10:01:42.176394 systemd[1]: Starting systemd-update-utmp.service... Feb 9 10:01:42.256604 systemd[1]: Finished clean-ca-certificates.service. Feb 9 10:01:42.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.262703 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 10:01:42.265626 kernel: kauditd_printk_skb: 87 callbacks suppressed Feb 9 10:01:42.265686 kernel: audit: type=1130 audit(1707472902.260:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.334000 audit[1325]: SYSTEM_BOOT pid=1325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.339268 systemd[1]: Finished systemd-update-utmp.service. 
Feb 9 10:01:42.358188 kernel: audit: type=1127 audit(1707472902.334:171): pid=1325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.375555 systemd[1]: Started systemd-timesyncd.service. Feb 9 10:01:42.377500 kernel: audit: type=1130 audit(1707472902.357:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.380346 systemd[1]: Reached target time-set.target. Feb 9 10:01:42.402253 kernel: audit: type=1130 audit(1707472902.378:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.398946 systemd-resolved[1322]: Positive Trust Anchors: Feb 9 10:01:42.398956 systemd-resolved[1322]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 10:01:42.398986 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 10:01:42.431005 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 10:01:42.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.455505 kernel: audit: type=1130 audit(1707472902.435:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.489037 systemd-resolved[1322]: Using system hostname 'ci-3510.3.2-a-c671677e8d'. Feb 9 10:01:42.490683 systemd[1]: Started systemd-resolved.service. Feb 9 10:01:42.495609 systemd[1]: Reached target network.target. Feb 9 10:01:42.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:42.516096 systemd[1]: Reached target network-online.target. Feb 9 10:01:42.521310 kernel: audit: type=1130 audit(1707472902.494:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:42.521730 systemd[1]: Reached target nss-lookup.target. Feb 9 10:01:42.731985 systemd-timesyncd[1324]: Contacted time server 5.161.184.148:123 (0.flatcar.pool.ntp.org). Feb 9 10:01:42.732426 systemd-timesyncd[1324]: Initial clock synchronization to Fri 2024-02-09 10:01:42.742398 UTC. Feb 9 10:01:42.740000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 10:01:42.740000 audit[1340]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc9e45380 a2=420 a3=0 items=0 ppid=1319 pid=1340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:42.756308 augenrules[1340]: No rules Feb 9 10:01:42.779694 kernel: audit: type=1305 audit(1707472902.740:176): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 10:01:42.779804 kernel: audit: type=1300 audit(1707472902.740:176): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc9e45380 a2=420 a3=0 items=0 ppid=1319 pid=1340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:42.740000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 10:01:42.780458 systemd[1]: Finished audit-rules.service. Feb 9 10:01:42.791845 kernel: audit: type=1327 audit(1707472902.740:176): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 10:01:49.671834 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 10:01:49.687804 systemd[1]: Finished ldconfig.service. 
Feb 9 10:01:49.693766 systemd[1]: Starting systemd-update-done.service... Feb 9 10:01:49.731002 systemd[1]: Finished systemd-update-done.service. Feb 9 10:01:49.735945 systemd[1]: Reached target sysinit.target. Feb 9 10:01:49.740256 systemd[1]: Started motdgen.path. Feb 9 10:01:49.745272 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 10:01:49.752180 systemd[1]: Started logrotate.timer. Feb 9 10:01:49.756178 systemd[1]: Started mdadm.timer. Feb 9 10:01:49.759973 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 10:01:49.764636 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 10:01:49.764668 systemd[1]: Reached target paths.target. Feb 9 10:01:49.768797 systemd[1]: Reached target timers.target. Feb 9 10:01:49.774535 systemd[1]: Listening on dbus.socket. Feb 9 10:01:49.779514 systemd[1]: Starting docker.socket... Feb 9 10:01:49.785236 systemd[1]: Listening on sshd.socket. Feb 9 10:01:49.789313 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 10:01:49.789735 systemd[1]: Listening on docker.socket. Feb 9 10:01:49.793994 systemd[1]: Reached target sockets.target. Feb 9 10:01:49.798199 systemd[1]: Reached target basic.target. Feb 9 10:01:49.802351 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 10:01:49.802381 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 10:01:49.803404 systemd[1]: Starting containerd.service... Feb 9 10:01:49.808167 systemd[1]: Starting dbus.service... Feb 9 10:01:49.812413 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 10:01:49.817642 systemd[1]: Starting extend-filesystems.service... 
Feb 9 10:01:49.824377 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 10:01:49.825413 systemd[1]: Starting motdgen.service... Feb 9 10:01:49.829959 systemd[1]: Started nvidia.service. Feb 9 10:01:49.835392 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 10:01:49.840765 systemd[1]: Starting prepare-critools.service... Feb 9 10:01:49.846078 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 10:01:49.851725 systemd[1]: Starting sshd-keygen.service... Feb 9 10:01:49.858502 systemd[1]: Starting systemd-logind.service... Feb 9 10:01:49.862588 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 10:01:49.862640 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 10:01:49.863316 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 10:01:49.864051 systemd[1]: Starting update-engine.service... Feb 9 10:01:49.869024 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 10:01:49.877910 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 10:01:49.878078 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 10:01:49.949814 systemd-logind[1364]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 10:01:49.949990 systemd-logind[1364]: New seat seat0. Feb 9 10:01:49.957942 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 10:01:49.958117 systemd[1]: Finished motdgen.service. 
Feb 9 10:01:49.984684 extend-filesystems[1351]: Found sda Feb 9 10:01:49.984684 extend-filesystems[1351]: Found sda1 Feb 9 10:01:49.984684 extend-filesystems[1351]: Found sda2 Feb 9 10:01:49.984684 extend-filesystems[1351]: Found sda3 Feb 9 10:01:49.984684 extend-filesystems[1351]: Found usr Feb 9 10:01:49.984684 extend-filesystems[1351]: Found sda4 Feb 9 10:01:49.984684 extend-filesystems[1351]: Found sda6 Feb 9 10:01:49.984684 extend-filesystems[1351]: Found sda7 Feb 9 10:01:49.984684 extend-filesystems[1351]: Found sda9 Feb 9 10:01:49.984684 extend-filesystems[1351]: Checking size of /dev/sda9 Feb 9 10:01:50.049154 env[1372]: time="2024-02-09T10:01:50.049102158Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 10:01:50.057139 jq[1350]: false Feb 9 10:01:50.057418 jq[1368]: true Feb 9 10:01:50.076057 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 10:01:50.076220 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 10:01:50.084309 tar[1370]: ./ Feb 9 10:01:50.084309 tar[1370]: ./loopback Feb 9 10:01:50.084708 env[1372]: time="2024-02-09T10:01:50.084666746Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 10:01:50.084907 env[1372]: time="2024-02-09T10:01:50.084886198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 10:01:50.085945 tar[1371]: crictl Feb 9 10:01:50.086781 env[1372]: time="2024-02-09T10:01:50.086747100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 10:01:50.086906 env[1372]: time="2024-02-09T10:01:50.086887239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 10:01:50.087179 env[1372]: time="2024-02-09T10:01:50.087153351Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 10:01:50.087271 env[1372]: time="2024-02-09T10:01:50.087254513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 10:01:50.087341 env[1372]: time="2024-02-09T10:01:50.087324263Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 10:01:50.087400 env[1372]: time="2024-02-09T10:01:50.087386849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 10:01:50.087625 env[1372]: time="2024-02-09T10:01:50.087567645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 10:01:50.087944 env[1372]: time="2024-02-09T10:01:50.087920073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 10:01:50.088197 env[1372]: time="2024-02-09T10:01:50.088173299Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 10:01:50.088293 env[1372]: time="2024-02-09T10:01:50.088277703Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 10:01:50.088459 env[1372]: time="2024-02-09T10:01:50.088436850Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 10:01:50.088561 env[1372]: time="2024-02-09T10:01:50.088545496Z" level=info msg="metadata content store policy set" policy=shared Feb 9 10:01:50.124245 extend-filesystems[1351]: Old size kept for /dev/sda9 Feb 9 10:01:50.132359 extend-filesystems[1351]: Found sr0 Feb 9 10:01:50.157388 jq[1404]: true Feb 9 10:01:50.124856 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.146937877Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.146980455Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.147007707Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.147049084Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.147128678Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.147156689Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.147170295Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.147563100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.147598675Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.147613041Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.147628568Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.147642774Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.147973072Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 10:01:50.157584 env[1372]: time="2024-02-09T10:01:50.148084119Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 10:01:50.125042 systemd[1]: Finished extend-filesystems.service. Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148698057Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148743716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148759643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148823030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148838756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148851041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148863887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148934797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148949723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148960968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148973253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.148991941Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.149154889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.149190304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 10:01:50.162873 env[1372]: time="2024-02-09T10:01:50.149203750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 10:01:50.152680 systemd[1]: Started containerd.service. Feb 9 10:01:50.170873 env[1372]: time="2024-02-09T10:01:50.149217235Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 10:01:50.170873 env[1372]: time="2024-02-09T10:01:50.149231441Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 10:01:50.170873 env[1372]: time="2024-02-09T10:01:50.149254771Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 10:01:50.170873 env[1372]: time="2024-02-09T10:01:50.149274940Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 10:01:50.170873 env[1372]: time="2024-02-09T10:01:50.149309074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.151092904Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.151177379Z" level=info msg="Connect containerd service" Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.151438409Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.152165795Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.152305053Z" level=info msg="Start subscribing containerd event" Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.152359476Z" level=info msg="Start recovering state" Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.152420622Z" level=info msg="Start event monitor" Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.152437469Z" level=info msg="Start snapshots syncer" Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.152447513Z" level=info msg="Start cni network conf syncer for default" Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.152454676Z" level=info msg="Start streaming server" Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.152463320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.152536310Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 10:01:50.171453 env[1372]: time="2024-02-09T10:01:50.152603499Z" level=info msg="containerd successfully booted in 0.104260s" Feb 9 10:01:50.189686 tar[1370]: ./bandwidth Feb 9 10:01:50.203428 dbus-daemon[1349]: [system] SELinux support is enabled Feb 9 10:01:50.203601 systemd[1]: Started dbus.service. Feb 9 10:01:50.209244 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 10:01:50.209296 systemd[1]: Reached target system-config.target. Feb 9 10:01:50.217902 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 10:01:50.217927 systemd[1]: Reached target user-config.target. Feb 9 10:01:50.227236 systemd[1]: Started systemd-logind.service. Feb 9 10:01:50.231753 dbus-daemon[1349]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 10:01:50.273738 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 10:01:50.302522 tar[1370]: ./ptp Feb 9 10:01:50.324332 bash[1425]: Updated "/home/core/.ssh/authorized_keys" Feb 9 10:01:50.325004 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 10:01:50.382946 tar[1370]: ./vlan Feb 9 10:01:50.448048 tar[1370]: ./host-device Feb 9 10:01:50.515101 tar[1370]: ./tuning Feb 9 10:01:50.571855 tar[1370]: ./vrf Feb 9 10:01:50.632851 tar[1370]: ./sbr Feb 9 10:01:50.643885 systemd[1]: Finished prepare-critools.service. Feb 9 10:01:50.669462 tar[1370]: ./tap Feb 9 10:01:50.703030 tar[1370]: ./dhcp Feb 9 10:01:50.720450 update_engine[1366]: I0209 10:01:50.706175 1366 main.cc:92] Flatcar Update Engine starting Feb 9 10:01:50.773123 systemd[1]: Started update-engine.service. 
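The "failed to load cni during init" error a few entries above is expected on first boot: containerd's CRI plugin looks for a parseable network config under `/etc/cni/net.d` (per its `NetworkPluginConfDir` setting) and warns until one is installed. A loose sketch of that init-time check, with a hypothetical file name and minimal conflist for illustration:

```python
import json
import pathlib
import tempfile

def cni_config_present(conf_dir):
    """Loosely mimic containerd's init-time CNI check: the warning above
    clears once at least one parseable .conf/.conflist file exists."""
    d = pathlib.Path(conf_dir)
    if not d.is_dir():
        return False
    return any(_parses(f) for f in sorted(d.glob("*.conf*")))

def _parses(path):
    try:
        json.loads(path.read_text())
        return True
    except (json.JSONDecodeError, OSError):
        return False

with tempfile.TemporaryDirectory() as d:
    empty = cni_config_present(d)  # no config yet -> False (the logged state)
    # Hypothetical minimal conflist, just enough to parse as JSON:
    (pathlib.Path(d) / "10-demo.conflist").write_text(
        json.dumps({"cniVersion": "0.4.0", "name": "demo", "plugins": []}))
    populated = cni_config_present(d)  # config installed -> True
```

This is only a structural sketch; containerd additionally validates the conflist against the CNI spec before using it.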
Feb 9 10:01:50.773547 update_engine[1366]: I0209 10:01:50.773154 1366 update_check_scheduler.cc:74] Next update check in 8m51s Feb 9 10:01:50.779280 systemd[1]: Started locksmithd.service. Feb 9 10:01:50.797014 tar[1370]: ./static Feb 9 10:01:50.821565 tar[1370]: ./firewall Feb 9 10:01:50.857791 tar[1370]: ./macvlan Feb 9 10:01:50.891064 tar[1370]: ./dummy Feb 9 10:01:50.923811 tar[1370]: ./bridge Feb 9 10:01:50.959536 tar[1370]: ./ipvlan Feb 9 10:01:50.992153 tar[1370]: ./portmap Feb 9 10:01:51.023378 tar[1370]: ./host-local Feb 9 10:01:51.098030 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 10:01:52.453116 locksmithd[1454]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 10:01:52.930397 sshd_keygen[1367]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 10:01:52.947430 systemd[1]: Finished sshd-keygen.service. Feb 9 10:01:52.953282 systemd[1]: Starting issuegen.service... Feb 9 10:01:52.957986 systemd[1]: Started waagent.service. Feb 9 10:01:52.962589 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 10:01:52.962740 systemd[1]: Finished issuegen.service. Feb 9 10:01:52.968095 systemd[1]: Starting systemd-user-sessions.service... Feb 9 10:01:52.994097 systemd[1]: Finished systemd-user-sessions.service. Feb 9 10:01:53.000387 systemd[1]: Started getty@tty1.service. Feb 9 10:01:53.005796 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 10:01:53.011343 systemd[1]: Reached target getty.target. Feb 9 10:01:53.016165 systemd[1]: Reached target multi-user.target. Feb 9 10:01:53.022583 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 10:01:53.034514 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 10:01:53.034681 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 10:01:53.040054 systemd[1]: Startup finished in 714ms (kernel) + 16.692s (initrd) + 25.709s (userspace) = 43.116s. 
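The "Startup finished" entry above breaks boot time into kernel, initrd, and userspace stages. The per-stage figures can be pulled out and cross-checked against the logged total; a small parser, using the exact line from this log:

```python
import re

LINE = ("Startup finished in 714ms (kernel) + 16.692s (initrd) "
        "+ 25.709s (userspace) = 43.116s.")

def parse_startup(line):
    """Return {stage: seconds} for a systemd 'Startup finished' line."""
    out = {}
    for value, unit, stage in re.findall(r"([\d.]+)(ms|s) \((\w+)\)", line):
        out[stage] = float(value) / (1000 if unit == "ms" else 1)
    return out

parts = parse_startup(LINE)
# 0.714 + 16.692 + 25.709 = 43.115s, matching the logged 43.116s total
# up to per-stage rounding.
```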
Feb 9 10:01:53.749419 login[1474]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 10:01:53.750868 login[1475]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 10:01:53.836846 systemd[1]: Created slice user-500.slice. Feb 9 10:01:53.837883 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 10:01:53.840780 systemd-logind[1364]: New session 2 of user core. Feb 9 10:01:53.877403 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 10:01:53.878853 systemd[1]: Starting user@500.service... Feb 9 10:01:53.912629 (systemd)[1478]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:54.260428 systemd[1478]: Queued start job for default target default.target. Feb 9 10:01:54.261432 systemd[1478]: Reached target paths.target. Feb 9 10:01:54.261459 systemd[1478]: Reached target sockets.target. Feb 9 10:01:54.261471 systemd[1478]: Reached target timers.target. Feb 9 10:01:54.261492 systemd[1478]: Reached target basic.target. Feb 9 10:01:54.261544 systemd[1478]: Reached target default.target. Feb 9 10:01:54.261571 systemd[1478]: Startup finished in 343ms. Feb 9 10:01:54.261610 systemd[1]: Started user@500.service. Feb 9 10:01:54.262549 systemd[1]: Started session-2.scope. Feb 9 10:01:54.750139 login[1474]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 10:01:54.754477 systemd[1]: Started session-1.scope. Feb 9 10:01:54.754950 systemd-logind[1364]: New session 1 of user core. 
Feb 9 10:02:00.404060 waagent[1471]: 2024-02-09T10:02:00.403950Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 10:02:00.440461 waagent[1471]: 2024-02-09T10:02:00.440375Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 10:02:00.445220 waagent[1471]: 2024-02-09T10:02:00.445152Z INFO Daemon Daemon Python: 3.9.16 Feb 9 10:02:00.459545 waagent[1471]: 2024-02-09T10:02:00.449757Z INFO Daemon Daemon Run daemon Feb 9 10:02:00.459545 waagent[1471]: 2024-02-09T10:02:00.454258Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 10:02:00.471381 waagent[1471]: 2024-02-09T10:02:00.471259Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 10:02:00.486464 waagent[1471]: 2024-02-09T10:02:00.486336Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 10:02:00.496392 waagent[1471]: 2024-02-09T10:02:00.496319Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 10:02:00.501478 waagent[1471]: 2024-02-09T10:02:00.501408Z INFO Daemon Daemon Using waagent for provisioning Feb 9 10:02:00.507369 waagent[1471]: 2024-02-09T10:02:00.507308Z INFO Daemon Daemon Activate resource disk Feb 9 10:02:00.512125 waagent[1471]: 2024-02-09T10:02:00.512065Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 10:02:00.526806 waagent[1471]: 2024-02-09T10:02:00.526735Z INFO Daemon Daemon Found device: None Feb 9 10:02:00.531845 waagent[1471]: 2024-02-09T10:02:00.531779Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 10:02:00.540916 waagent[1471]: 2024-02-09T10:02:00.540830Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 
10:02:00.553490 waagent[1471]: 2024-02-09T10:02:00.553418Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 10:02:00.559666 waagent[1471]: 2024-02-09T10:02:00.559605Z INFO Daemon Daemon Running default provisioning handler Feb 9 10:02:00.572821 waagent[1471]: 2024-02-09T10:02:00.572696Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 10:02:00.587931 waagent[1471]: 2024-02-09T10:02:00.587801Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 10:02:00.597730 waagent[1471]: 2024-02-09T10:02:00.597657Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 10:02:00.603600 waagent[1471]: 2024-02-09T10:02:00.603531Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 10:02:00.744273 waagent[1471]: 2024-02-09T10:02:00.744083Z INFO Daemon Daemon Successfully mounted dvd Feb 9 10:02:00.859447 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 10:02:00.901506 waagent[1471]: 2024-02-09T10:02:00.901353Z INFO Daemon Daemon Detect protocol endpoint Feb 9 10:02:00.906572 waagent[1471]: 2024-02-09T10:02:00.906471Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 10:02:00.912499 waagent[1471]: 2024-02-09T10:02:00.912421Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 10:02:00.919324 waagent[1471]: 2024-02-09T10:02:00.919255Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 10:02:00.924951 waagent[1471]: 2024-02-09T10:02:00.924890Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 10:02:00.930183 waagent[1471]: 2024-02-09T10:02:00.930123Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 10:02:01.052371 waagent[1471]: 2024-02-09T10:02:01.052307Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 10:02:01.060183 waagent[1471]: 2024-02-09T10:02:01.060127Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 10:02:01.066291 waagent[1471]: 2024-02-09T10:02:01.066210Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 10:02:01.646214 waagent[1471]: 2024-02-09T10:02:01.646074Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 10:02:01.662005 waagent[1471]: 2024-02-09T10:02:01.661928Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 10:02:01.667975 waagent[1471]: 2024-02-09T10:02:01.667910Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 10:02:01.739322 waagent[1471]: 2024-02-09T10:02:01.739205Z INFO Daemon Daemon Found private key matching thumbprint 73FFE267F8AD994D88F538FCAE20A26EBDCE6526 Feb 9 10:02:01.748290 waagent[1471]: 2024-02-09T10:02:01.748199Z INFO Daemon Daemon Certificate with thumbprint 8DC666CC8F71AA8069AA7A936C2E759D2A4FCE5F has no matching private key. 
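The thumbprints the daemon matches against goal-state certificates (73FFE267... and 8DC666CC... above) are, by convention, the uppercase hex SHA-1 of the certificate's DER encoding. A sketch of that computation; the `b"test"` input below is a placeholder, not a real certificate:

```python
import hashlib

def cert_thumbprint(der_bytes):
    """Certificate thumbprints like those logged above are conventionally
    the uppercase SHA-1 hex digest of the DER-encoded certificate."""
    return hashlib.sha1(der_bytes).hexdigest().upper()

# Placeholder input for illustration only; a real thumbprint is computed
# over the certificate's DER bytes.
demo = cert_thumbprint(b"test")
```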
Feb 9 10:02:01.758707 waagent[1471]: 2024-02-09T10:02:01.758618Z INFO Daemon Daemon Fetch goal state completed Feb 9 10:02:01.815594 waagent[1471]: 2024-02-09T10:02:01.815530Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 8fa51d41-9414-4d83-be87-a4a5ecbb4272 New eTag: 12032813261933711044] Feb 9 10:02:01.827715 waagent[1471]: 2024-02-09T10:02:01.827629Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 10:02:01.843971 waagent[1471]: 2024-02-09T10:02:01.843909Z INFO Daemon Daemon Starting provisioning Feb 9 10:02:01.849273 waagent[1471]: 2024-02-09T10:02:01.849196Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 10:02:01.854265 waagent[1471]: 2024-02-09T10:02:01.854198Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-c671677e8d] Feb 9 10:02:01.941907 waagent[1471]: 2024-02-09T10:02:01.941782Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-c671677e8d] Feb 9 10:02:01.949313 waagent[1471]: 2024-02-09T10:02:01.949217Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 10:02:01.956600 waagent[1471]: 2024-02-09T10:02:01.956519Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 10:02:01.972933 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 10:02:01.973107 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 10:02:01.973162 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 10:02:01.973394 systemd[1]: Stopping systemd-networkd.service... Feb 9 10:02:01.979528 systemd-networkd[1245]: eth0: DHCPv6 lease lost Feb 9 10:02:01.980800 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 10:02:01.980976 systemd[1]: Stopped systemd-networkd.service. Feb 9 10:02:01.982950 systemd[1]: Starting systemd-networkd.service... 
Feb 9 10:02:02.010131 systemd-networkd[1525]: enP25640s1: Link UP Feb 9 10:02:02.010142 systemd-networkd[1525]: enP25640s1: Gained carrier Feb 9 10:02:02.011136 systemd-networkd[1525]: eth0: Link UP Feb 9 10:02:02.011147 systemd-networkd[1525]: eth0: Gained carrier Feb 9 10:02:02.011453 systemd-networkd[1525]: lo: Link UP Feb 9 10:02:02.011462 systemd-networkd[1525]: lo: Gained carrier Feb 9 10:02:02.011703 systemd-networkd[1525]: eth0: Gained IPv6LL Feb 9 10:02:02.012821 systemd-networkd[1525]: Enumeration completed Feb 9 10:02:02.012934 systemd[1]: Started systemd-networkd.service. Feb 9 10:02:02.014650 systemd-networkd[1525]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 10:02:02.014723 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 10:02:02.018760 waagent[1471]: 2024-02-09T10:02:02.018619Z INFO Daemon Daemon Create user account if not exists Feb 9 10:02:02.025043 waagent[1471]: 2024-02-09T10:02:02.024932Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 10:02:02.031325 waagent[1471]: 2024-02-09T10:02:02.031237Z INFO Daemon Daemon Configure sudoer Feb 9 10:02:02.032555 systemd-networkd[1525]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 10:02:02.037543 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 10:02:02.038280 waagent[1471]: 2024-02-09T10:02:02.037759Z INFO Daemon Daemon Configure sshd Feb 9 10:02:02.042316 waagent[1471]: 2024-02-09T10:02:02.042240Z INFO Daemon Daemon Deploy ssh public key. Feb 9 10:02:03.268939 waagent[1471]: 2024-02-09T10:02:03.268863Z INFO Daemon Daemon Provisioning complete Feb 9 10:02:03.290670 waagent[1471]: 2024-02-09T10:02:03.290446Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 10:02:03.297533 waagent[1471]: 2024-02-09T10:02:03.297434Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Feb 9 10:02:03.309130 waagent[1471]: 2024-02-09T10:02:03.309053Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 10:02:03.606819 waagent[1534]: 2024-02-09T10:02:03.606681Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 10:02:03.607915 waagent[1534]: 2024-02-09T10:02:03.607861Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 10:02:03.608156 waagent[1534]: 2024-02-09T10:02:03.608108Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 10:02:03.620127 waagent[1534]: 2024-02-09T10:02:03.620061Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 10:02:03.620386 waagent[1534]: 2024-02-09T10:02:03.620337Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 10:02:03.684700 waagent[1534]: 2024-02-09T10:02:03.684574Z INFO ExtHandler ExtHandler Found private key matching thumbprint 73FFE267F8AD994D88F538FCAE20A26EBDCE6526 Feb 9 10:02:03.685056 waagent[1534]: 2024-02-09T10:02:03.685005Z INFO ExtHandler ExtHandler Certificate with thumbprint 8DC666CC8F71AA8069AA7A936C2E759D2A4FCE5F has no matching private key. 
Feb 9 10:02:03.685371 waagent[1534]: 2024-02-09T10:02:03.685322Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 10:02:03.701546 waagent[1534]: 2024-02-09T10:02:03.701475Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 8d5ef31a-c3c8-4112-87b0-cd895b5671fc New eTag: 12032813261933711044] Feb 9 10:02:03.702288 waagent[1534]: 2024-02-09T10:02:03.702234Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 10:02:03.783622 waagent[1534]: 2024-02-09T10:02:03.783472Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 10:02:03.808446 waagent[1534]: 2024-02-09T10:02:03.808363Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1534 Feb 9 10:02:03.812297 waagent[1534]: 2024-02-09T10:02:03.812236Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 10:02:03.813747 waagent[1534]: 2024-02-09T10:02:03.813691Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 10:02:03.938294 waagent[1534]: 2024-02-09T10:02:03.938186Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 10:02:03.938874 waagent[1534]: 2024-02-09T10:02:03.938818Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 10:02:03.946421 waagent[1534]: 2024-02-09T10:02:03.946369Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 10:02:03.947053 waagent[1534]: 2024-02-09T10:02:03.946999Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 10:02:03.948304 waagent[1534]: 2024-02-09T10:02:03.948244Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 10:02:03.949754 waagent[1534]: 2024-02-09T10:02:03.949686Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 10:02:03.950015 waagent[1534]: 2024-02-09T10:02:03.949948Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 10:02:03.950597 waagent[1534]: 2024-02-09T10:02:03.950522Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 10:02:03.951180 waagent[1534]: 2024-02-09T10:02:03.951119Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 10:02:03.951493 waagent[1534]: 2024-02-09T10:02:03.951429Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 10:02:03.951493 waagent[1534]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 10:02:03.951493 waagent[1534]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 10:02:03.951493 waagent[1534]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 10:02:03.951493 waagent[1534]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 10:02:03.951493 waagent[1534]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 10:02:03.951493 waagent[1534]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 10:02:03.953622 waagent[1534]: 2024-02-09T10:02:03.953437Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
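The Destination/Gateway/Mask columns in the `/proc/net/route` dump above are little-endian hex. Decoding them confirms what the earlier entries tested: `10813FA8` is the Azure wire server 168.63.129.16, and `0114C80A` is the gateway 10.200.20.1 acquired via DHCP. A small decoder:

```python
import ipaddress
import struct

def route_hex_to_ip(hexfield):
    """Decode a /proc/net/route address field (little-endian hex),
    as printed in the waagent 'Routing table' dump above."""
    return str(ipaddress.IPv4Address(struct.pack("<I", int(hexfield, 16))))

wire_server = route_hex_to_ip("10813FA8")  # the Azure wire server route
gateway = route_hex_to_ip("0114C80A")      # the DHCP-assigned gateway
default_dst = route_hex_to_ip("00000000")  # default route destination
```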
Feb 9 10:02:03.954451 waagent[1534]: 2024-02-09T10:02:03.954380Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 10:02:03.954669 waagent[1534]: 2024-02-09T10:02:03.954611Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 10:02:03.955243 waagent[1534]: 2024-02-09T10:02:03.955178Z INFO EnvHandler ExtHandler Configure routes Feb 9 10:02:03.955393 waagent[1534]: 2024-02-09T10:02:03.955348Z INFO EnvHandler ExtHandler Gateway:None Feb 9 10:02:03.955530 waagent[1534]: 2024-02-09T10:02:03.955465Z INFO EnvHandler ExtHandler Routes:None Feb 9 10:02:03.956383 waagent[1534]: 2024-02-09T10:02:03.956326Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 10:02:03.956559 waagent[1534]: 2024-02-09T10:02:03.956469Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 10:02:03.957445 waagent[1534]: 2024-02-09T10:02:03.957353Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 10:02:03.957673 waagent[1534]: 2024-02-09T10:02:03.957604Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 10:02:03.957951 waagent[1534]: 2024-02-09T10:02:03.957888Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 10:02:03.967936 waagent[1534]: 2024-02-09T10:02:03.967865Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 10:02:03.970002 waagent[1534]: 2024-02-09T10:02:03.969942Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 10:02:03.971312 waagent[1534]: 2024-02-09T10:02:03.971257Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 9 10:02:04.003021 waagent[1534]: 2024-02-09T10:02:04.002891Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1525' Feb 9 10:02:04.022328 waagent[1534]: 2024-02-09T10:02:04.022264Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Feb 9 10:02:04.095560 waagent[1534]: 2024-02-09T10:02:04.094665Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 10:02:04.095560 waagent[1534]: Executing ['ip', '-a', '-o', 'link']: Feb 9 10:02:04.095560 waagent[1534]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 10:02:04.095560 waagent[1534]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:62:a5 brd ff:ff:ff:ff:ff:ff Feb 9 10:02:04.095560 waagent[1534]: 3: enP25640s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:62:a5 brd ff:ff:ff:ff:ff:ff\ altname enP25640p0s2 Feb 9 10:02:04.095560 waagent[1534]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 10:02:04.095560 waagent[1534]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 10:02:04.095560 waagent[1534]: 2: eth0 inet 10.200.20.13/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 10:02:04.095560 waagent[1534]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 10:02:04.095560 waagent[1534]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 10:02:04.095560 waagent[1534]: 2: eth0 inet6 fe80::222:48ff:fe7c:62a5/64 scope link \ valid_lft forever preferred_lft forever Feb 9 10:02:04.212726 waagent[1534]: 2024-02-09T10:02:04.212618Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 10:02:04.311781 waagent[1471]: 
2024-02-09T10:02:04.311655Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 10:02:04.315409 waagent[1471]: 2024-02-09T10:02:04.315355Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 10:02:05.494566 waagent[1563]: 2024-02-09T10:02:05.494447Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 10:02:05.497520 waagent[1563]: 2024-02-09T10:02:05.497444Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 10:02:05.497769 waagent[1563]: 2024-02-09T10:02:05.497720Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 10:02:05.505811 waagent[1563]: 2024-02-09T10:02:05.505715Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 10:02:05.506281 waagent[1563]: 2024-02-09T10:02:05.506230Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 10:02:05.506545 waagent[1563]: 2024-02-09T10:02:05.506474Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 10:02:05.518882 waagent[1563]: 2024-02-09T10:02:05.518815Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 10:02:05.526930 waagent[1563]: 2024-02-09T10:02:05.526880Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 10:02:05.528012 waagent[1563]: 2024-02-09T10:02:05.527956Z INFO ExtHandler Feb 9 10:02:05.528247 waagent[1563]: 2024-02-09T10:02:05.528200Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 57e8041b-fc86-49e8-804c-d51783ad6d78 eTag: 12032813261933711044 source: Fabric] Feb 9 10:02:05.529080 waagent[1563]: 2024-02-09T10:02:05.529025Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 9 10:02:05.530382 waagent[1563]: 2024-02-09T10:02:05.530325Z INFO ExtHandler Feb 9 10:02:05.530629 waagent[1563]: 2024-02-09T10:02:05.530580Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 10:02:05.540068 waagent[1563]: 2024-02-09T10:02:05.540014Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 10:02:05.540669 waagent[1563]: 2024-02-09T10:02:05.540622Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 10:02:05.559348 waagent[1563]: 2024-02-09T10:02:05.559289Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 10:02:05.628214 waagent[1563]: 2024-02-09T10:02:05.628086Z INFO ExtHandler Downloaded certificate {'thumbprint': '8DC666CC8F71AA8069AA7A936C2E759D2A4FCE5F', 'hasPrivateKey': False} Feb 9 10:02:05.629404 waagent[1563]: 2024-02-09T10:02:05.629347Z INFO ExtHandler Downloaded certificate {'thumbprint': '73FFE267F8AD994D88F538FCAE20A26EBDCE6526', 'hasPrivateKey': True} Feb 9 10:02:05.630571 waagent[1563]: 2024-02-09T10:02:05.630512Z INFO ExtHandler Fetch goal state completed Feb 9 10:02:05.654660 waagent[1563]: 2024-02-09T10:02:05.654587Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1563 Feb 9 10:02:05.658346 waagent[1563]: 2024-02-09T10:02:05.658279Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 10:02:05.659967 waagent[1563]: 2024-02-09T10:02:05.659908Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 10:02:05.665295 waagent[1563]: 2024-02-09T10:02:05.665244Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 10:02:05.665817 waagent[1563]: 2024-02-09T10:02:05.665761Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 10:02:05.673683 
waagent[1563]: 2024-02-09T10:02:05.673624Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 10:02:05.674310 waagent[1563]: 2024-02-09T10:02:05.674255Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 10:02:05.680373 waagent[1563]: 2024-02-09T10:02:05.680275Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 10:02:05.684161 waagent[1563]: 2024-02-09T10:02:05.684104Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 10:02:05.685796 waagent[1563]: 2024-02-09T10:02:05.685727Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 10:02:05.686157 waagent[1563]: 2024-02-09T10:02:05.686083Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 10:02:05.686818 waagent[1563]: 2024-02-09T10:02:05.686750Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 10:02:05.687410 waagent[1563]: 2024-02-09T10:02:05.687339Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 10:02:05.687750 waagent[1563]: 2024-02-09T10:02:05.687690Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 10:02:05.687750 waagent[1563]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 10:02:05.687750 waagent[1563]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 10:02:05.687750 waagent[1563]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 10:02:05.687750 waagent[1563]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 10:02:05.687750 waagent[1563]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 10:02:05.687750 waagent[1563]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 10:02:05.690015 waagent[1563]: 2024-02-09T10:02:05.689897Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 10:02:05.690592 waagent[1563]: 2024-02-09T10:02:05.690518Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 10:02:05.691017 waagent[1563]: 2024-02-09T10:02:05.690950Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 10:02:05.691632 waagent[1563]: 2024-02-09T10:02:05.691471Z INFO EnvHandler ExtHandler Configure routes Feb 9 10:02:05.691854 waagent[1563]: 2024-02-09T10:02:05.691782Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 10:02:05.692019 waagent[1563]: 2024-02-09T10:02:05.691957Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Feb 9 10:02:05.694515 waagent[1563]: 2024-02-09T10:02:05.694329Z INFO EnvHandler ExtHandler Gateway:None Feb 9 10:02:05.695221 waagent[1563]: 2024-02-09T10:02:05.695145Z INFO EnvHandler ExtHandler Routes:None Feb 9 10:02:05.696805 waagent[1563]: 2024-02-09T10:02:05.696734Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 10:02:05.698672 waagent[1563]: 2024-02-09T10:02:05.698433Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 10:02:05.698960 waagent[1563]: 2024-02-09T10:02:05.698882Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 10:02:05.703401 waagent[1563]: 2024-02-09T10:02:05.703330Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 10:02:05.703401 waagent[1563]: Executing ['ip', '-a', '-o', 'link']: Feb 9 10:02:05.703401 waagent[1563]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 10:02:05.703401 waagent[1563]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:62:a5 brd ff:ff:ff:ff:ff:ff Feb 9 10:02:05.703401 waagent[1563]: 3: enP25640s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:62:a5 brd ff:ff:ff:ff:ff:ff\ altname enP25640p0s2 Feb 9 10:02:05.703401 waagent[1563]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 10:02:05.703401 waagent[1563]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 10:02:05.703401 waagent[1563]: 2: eth0 inet 10.200.20.13/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 10:02:05.703401 waagent[1563]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 10:02:05.703401 waagent[1563]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever 
Feb 9 10:02:05.703401 waagent[1563]: 2: eth0 inet6 fe80::222:48ff:fe7c:62a5/64 scope link \ valid_lft forever preferred_lft forever Feb 9 10:02:05.720619 waagent[1563]: 2024-02-09T10:02:05.720530Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 10:02:05.721855 waagent[1563]: 2024-02-09T10:02:05.721790Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 10:02:05.758351 waagent[1563]: 2024-02-09T10:02:05.758298Z INFO ExtHandler ExtHandler Feb 9 10:02:05.758526 waagent[1563]: 2024-02-09T10:02:05.758452Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 37fbfc68-8a36-43a0-89e7-58482d34d2ae correlation f1b769aa-2dd5-433f-ad14-954eb9be2ec1 created: 2024-02-09T10:00:22.450035Z] Feb 9 10:02:05.759424 waagent[1563]: 2024-02-09T10:02:05.759358Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 10:02:05.761204 waagent[1563]: 2024-02-09T10:02:05.761151Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Feb 9 10:02:05.797408 waagent[1563]: 2024-02-09T10:02:05.797306Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 10:02:05.818915 waagent[1563]: 2024-02-09T10:02:05.818836Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C4A242AE-FB93-47D4-AE54-A48D748B37F9;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 10:02:05.956652 waagent[1563]: 2024-02-09T10:02:05.956526Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 9 10:02:05.956652 waagent[1563]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 10:02:05.956652 waagent[1563]: pkts bytes target prot opt in out source destination Feb 9 10:02:05.956652 waagent[1563]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 10:02:05.956652 waagent[1563]: pkts bytes target prot opt in out source destination Feb 9 10:02:05.956652 waagent[1563]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 10:02:05.956652 waagent[1563]: pkts bytes target prot opt in out source destination Feb 9 10:02:05.956652 waagent[1563]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 10:02:05.956652 waagent[1563]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 10:02:05.956652 waagent[1563]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 10:02:05.964141 waagent[1563]: 2024-02-09T10:02:05.964032Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 10:02:05.964141 waagent[1563]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 10:02:05.964141 waagent[1563]: pkts bytes target prot opt in out source destination Feb 9 10:02:05.964141 waagent[1563]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 10:02:05.964141 waagent[1563]: pkts bytes target prot opt in out source destination Feb 9 10:02:05.964141 waagent[1563]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 10:02:05.964141 waagent[1563]: pkts bytes target prot opt in out source destination Feb 9 10:02:05.964141 waagent[1563]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 10:02:05.964141 waagent[1563]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 10:02:05.964141 waagent[1563]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 10:02:05.964988 waagent[1563]: 2024-02-09T10:02:05.964939Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 10:02:27.636072 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Feb 9 10:02:35.958411 update_engine[1366]: I0209 10:02:35.958058 1366 update_attempter.cc:509] Updating boot flags... Feb 9 10:02:59.354703 systemd[1]: Created slice system-sshd.slice. Feb 9 10:02:59.355936 systemd[1]: Started sshd@0-10.200.20.13:22-10.200.12.6:33378.service. Feb 9 10:03:00.014816 sshd[1654]: Accepted publickey for core from 10.200.12.6 port 33378 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:03:00.034094 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:03:00.038798 systemd[1]: Started session-3.scope. Feb 9 10:03:00.039092 systemd-logind[1364]: New session 3 of user core. Feb 9 10:03:00.407191 systemd[1]: Started sshd@1-10.200.20.13:22-10.200.12.6:33390.service. Feb 9 10:03:00.849008 sshd[1659]: Accepted publickey for core from 10.200.12.6 port 33390 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:03:00.850577 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:03:00.854615 systemd[1]: Started session-4.scope. Feb 9 10:03:00.855523 systemd-logind[1364]: New session 4 of user core. Feb 9 10:03:01.173213 sshd[1659]: pam_unix(sshd:session): session closed for user core Feb 9 10:03:01.175511 systemd[1]: sshd@1-10.200.20.13:22-10.200.12.6:33390.service: Deactivated successfully. Feb 9 10:03:01.176176 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 10:03:01.176677 systemd-logind[1364]: Session 4 logged out. Waiting for processes to exit. Feb 9 10:03:01.177380 systemd-logind[1364]: Removed session 4. Feb 9 10:03:01.242816 systemd[1]: Started sshd@2-10.200.20.13:22-10.200.12.6:33406.service. 
Feb 9 10:03:01.653889 sshd[1665]: Accepted publickey for core from 10.200.12.6 port 33406 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:03:01.655422 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:03:01.659393 systemd[1]: Started session-5.scope. Feb 9 10:03:01.660546 systemd-logind[1364]: New session 5 of user core. Feb 9 10:03:01.952850 sshd[1665]: pam_unix(sshd:session): session closed for user core Feb 9 10:03:01.955530 systemd[1]: sshd@2-10.200.20.13:22-10.200.12.6:33406.service: Deactivated successfully. Feb 9 10:03:01.956165 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 10:03:01.956667 systemd-logind[1364]: Session 5 logged out. Waiting for processes to exit. Feb 9 10:03:01.957370 systemd-logind[1364]: Removed session 5. Feb 9 10:03:02.021997 systemd[1]: Started sshd@3-10.200.20.13:22-10.200.12.6:33420.service. Feb 9 10:03:02.449042 sshd[1671]: Accepted publickey for core from 10.200.12.6 port 33420 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:03:02.450657 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:03:02.454220 systemd-logind[1364]: New session 6 of user core. Feb 9 10:03:02.454707 systemd[1]: Started session-6.scope. Feb 9 10:03:02.749454 sshd[1671]: pam_unix(sshd:session): session closed for user core Feb 9 10:03:02.752068 systemd[1]: sshd@3-10.200.20.13:22-10.200.12.6:33420.service: Deactivated successfully. Feb 9 10:03:02.752747 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 10:03:02.753286 systemd-logind[1364]: Session 6 logged out. Waiting for processes to exit. Feb 9 10:03:02.754050 systemd-logind[1364]: Removed session 6. Feb 9 10:03:02.823124 systemd[1]: Started sshd@4-10.200.20.13:22-10.200.12.6:33432.service. 
Feb 9 10:03:03.234118 sshd[1677]: Accepted publickey for core from 10.200.12.6 port 33432 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:03:03.235679 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:03:03.239765 systemd[1]: Started session-7.scope. Feb 9 10:03:03.240304 systemd-logind[1364]: New session 7 of user core. Feb 9 10:03:03.772844 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 10:03:03.773045 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 10:03:04.558348 systemd[1]: Reloading. Feb 9 10:03:04.626141 /usr/lib/systemd/system-generators/torcx-generator[1709]: time="2024-02-09T10:03:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:03:04.626456 /usr/lib/systemd/system-generators/torcx-generator[1709]: time="2024-02-09T10:03:04Z" level=info msg="torcx already run" Feb 9 10:03:04.711267 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:03:04.711290 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:03:04.727036 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:03:04.825692 systemd[1]: Started kubelet.service. Feb 9 10:03:04.853197 systemd[1]: Starting coreos-metadata.service... 
Feb 9 10:03:04.875532 kubelet[1768]: E0209 10:03:04.875459 1768 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 10:03:04.877761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 10:03:04.877891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 10:03:04.891566 coreos-metadata[1775]: Feb 09 10:03:04.891 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 10:03:04.893928 coreos-metadata[1775]: Feb 09 10:03:04.893 INFO Fetch successful Feb 9 10:03:04.894081 coreos-metadata[1775]: Feb 09 10:03:04.894 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 9 10:03:04.895727 coreos-metadata[1775]: Feb 09 10:03:04.895 INFO Fetch successful Feb 9 10:03:04.896115 coreos-metadata[1775]: Feb 09 10:03:04.896 INFO Fetching http://168.63.129.16/machine/c7cc565a-133c-4bc2-be49-9181e1d6ff06/5022984d%2D9322%2D463d%2D85ec%2D74d5d2c879d9.%5Fci%2D3510.3.2%2Da%2Dc671677e8d?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 9 10:03:04.897654 coreos-metadata[1775]: Feb 09 10:03:04.897 INFO Fetch successful Feb 9 10:03:04.930809 coreos-metadata[1775]: Feb 09 10:03:04.930 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 9 10:03:04.945072 coreos-metadata[1775]: Feb 09 10:03:04.944 INFO Fetch successful Feb 9 10:03:04.953812 systemd[1]: Finished coreos-metadata.service. Feb 9 10:03:08.530023 systemd[1]: Stopped kubelet.service. Feb 9 10:03:08.547966 systemd[1]: Reloading. 
Feb 9 10:03:08.611432 /usr/lib/systemd/system-generators/torcx-generator[1835]: time="2024-02-09T10:03:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:03:08.611827 /usr/lib/systemd/system-generators/torcx-generator[1835]: time="2024-02-09T10:03:08Z" level=info msg="torcx already run" Feb 9 10:03:08.679300 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:03:08.679464 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:03:08.695108 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:03:08.790916 systemd[1]: Started kubelet.service. Feb 9 10:03:08.832197 kubelet[1895]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 10:03:08.832197 kubelet[1895]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 10:03:08.832197 kubelet[1895]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 10:03:08.832575 kubelet[1895]: I0209 10:03:08.832247 1895 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 10:03:09.499672 kubelet[1895]: I0209 10:03:09.499635 1895 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 10:03:09.499672 kubelet[1895]: I0209 10:03:09.499666 1895 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 10:03:09.499890 kubelet[1895]: I0209 10:03:09.499869 1895 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 10:03:09.502024 kubelet[1895]: I0209 10:03:09.502003 1895 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 10:03:09.503734 kubelet[1895]: W0209 10:03:09.503716 1895 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 10:03:09.504208 kubelet[1895]: I0209 10:03:09.504191 1895 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 10:03:09.504411 kubelet[1895]: I0209 10:03:09.504397 1895 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 10:03:09.504478 kubelet[1895]: I0209 10:03:09.504463 1895 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 10:03:09.504590 kubelet[1895]: I0209 10:03:09.504508 1895 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 10:03:09.504590 kubelet[1895]: I0209 10:03:09.504520 1895 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 10:03:09.504643 kubelet[1895]: I0209 10:03:09.504611 1895 state_mem.go:36] "Initialized new in-memory state store" Feb 9 
10:03:09.511417 kubelet[1895]: I0209 10:03:09.511388 1895 kubelet.go:405] "Attempting to sync node with API server" Feb 9 10:03:09.511417 kubelet[1895]: I0209 10:03:09.511415 1895 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 10:03:09.511551 kubelet[1895]: I0209 10:03:09.511439 1895 kubelet.go:309] "Adding apiserver pod source" Feb 9 10:03:09.511551 kubelet[1895]: I0209 10:03:09.511452 1895 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 10:03:09.511946 kubelet[1895]: E0209 10:03:09.511921 1895 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:09.511981 kubelet[1895]: E0209 10:03:09.511959 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:09.512509 kubelet[1895]: I0209 10:03:09.512439 1895 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 10:03:09.512753 kubelet[1895]: W0209 10:03:09.512731 1895 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 10:03:09.513172 kubelet[1895]: I0209 10:03:09.513149 1895 server.go:1168] "Started kubelet" Feb 9 10:03:09.514707 kubelet[1895]: I0209 10:03:09.514541 1895 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 10:03:09.515884 kubelet[1895]: I0209 10:03:09.515855 1895 server.go:461] "Adding debug handlers to kubelet server" Feb 9 10:03:09.524023 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 10:03:09.524113 kubelet[1895]: I0209 10:03:09.519518 1895 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 10:03:09.526157 kubelet[1895]: I0209 10:03:09.526136 1895 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 10:03:09.526473 kubelet[1895]: E0209 10:03:09.526460 1895 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 10:03:09.526586 kubelet[1895]: E0209 10:03:09.526575 1895 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 10:03:09.531188 kubelet[1895]: I0209 10:03:09.531172 1895 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 10:03:09.533450 kubelet[1895]: I0209 10:03:09.533435 1895 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 10:03:09.547697 kubelet[1895]: W0209 10:03:09.547669 1895 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 10:03:09.547791 kubelet[1895]: E0209 10:03:09.547703 1895 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 10:03:09.547791 kubelet[1895]: E0209 10:03:09.547749 1895 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" 
interval="200ms" Feb 9 10:03:09.547848 kubelet[1895]: W0209 10:03:09.547839 1895 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.20.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 10:03:09.547922 kubelet[1895]: E0209 10:03:09.547903 1895 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 10:03:09.547962 kubelet[1895]: W0209 10:03:09.547938 1895 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 10:03:09.547962 kubelet[1895]: E0209 10:03:09.547949 1895 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 10:03:09.548058 kubelet[1895]: E0209 10:03:09.547973 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32b811fe6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", 
Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 513129958, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 513129958, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 10:03:09.550992 kubelet[1895]: E0209 10:03:09.550926 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32c4e0c78", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 526559864, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 526559864, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API 
group "" in the namespace "default"' (will not retry!) Feb 9 10:03:09.555731 kubelet[1895]: I0209 10:03:09.555701 1895 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 10:03:09.555894 kubelet[1895]: I0209 10:03:09.555792 1895 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 10:03:09.555964 kubelet[1895]: I0209 10:03:09.555955 1895 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:03:09.556359 kubelet[1895]: E0209 10:03:09.556305 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcdc04", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554793476, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554793476, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:03:09.557165 kubelet[1895]: E0209 10:03:09.557114 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcf0f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554798835, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554798835, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:03:09.558278 kubelet[1895]: E0209 10:03:09.558226 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcfaf2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554801394, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554801394, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 10:03:09.568547 kubelet[1895]: I0209 10:03:09.568521 1895 policy_none.go:49] "None policy: Start" Feb 9 10:03:09.569399 kubelet[1895]: I0209 10:03:09.569371 1895 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 10:03:09.569399 kubelet[1895]: I0209 10:03:09.569401 1895 state_mem.go:35] "Initializing new in-memory state store" Feb 9 10:03:09.575678 kubelet[1895]: I0209 10:03:09.575663 1895 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 10:03:09.578167 kubelet[1895]: I0209 10:03:09.576591 1895 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 10:03:09.578167 kubelet[1895]: I0209 10:03:09.576609 1895 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 10:03:09.578167 kubelet[1895]: I0209 10:03:09.576632 1895 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 10:03:09.578167 kubelet[1895]: E0209 10:03:09.576674 1895 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 10:03:09.580514 kubelet[1895]: W0209 10:03:09.580491 1895 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 10:03:09.580597 kubelet[1895]: E0209 10:03:09.580520 1895 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 10:03:09.583341 systemd[1]: Created slice kubepods.slice. Feb 9 10:03:09.587278 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 10:03:09.589911 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 10:03:09.599234 kubelet[1895]: I0209 10:03:09.599212 1895 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 10:03:09.599555 kubelet[1895]: I0209 10:03:09.599539 1895 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 10:03:09.601450 kubelet[1895]: E0209 10:03:09.601433 1895 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.13\" not found" Feb 9 10:03:09.603545 kubelet[1895]: E0209 10:03:09.603439 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a330d2a350", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 602358096, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 602358096, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:03:09.632531 kubelet[1895]: I0209 10:03:09.632511 1895 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.13" Feb 9 10:03:09.633783 kubelet[1895]: E0209 10:03:09.633764 1895 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.13" Feb 9 10:03:09.634176 kubelet[1895]: E0209 10:03:09.634096 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcdc04", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554793476, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 632449075, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.13.17b229a32dfcdc04" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:03:09.635937 kubelet[1895]: E0209 10:03:09.635882 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcf0f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554798835, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 632454513, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.13.17b229a32dfcf0f3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:03:09.636878 kubelet[1895]: E0209 10:03:09.636815 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcfaf2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554801394, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 632457433, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.13.17b229a32dfcfaf2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:03:09.749285 kubelet[1895]: E0209 10:03:09.749258 1895 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 9 10:03:09.835266 kubelet[1895]: I0209 10:03:09.835240 1895 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.13" Feb 9 10:03:09.836343 kubelet[1895]: E0209 10:03:09.836265 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcdc04", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554793476, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 835181326, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.13.17b229a32dfcdc04" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:03:09.836588 kubelet[1895]: E0209 10:03:09.836564 1895 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.13" Feb 9 10:03:09.837246 kubelet[1895]: E0209 10:03:09.837188 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcf0f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554798835, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 835211359, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.13.17b229a32dfcf0f3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:03:09.837941 kubelet[1895]: E0209 10:03:09.837874 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcfaf2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554801394, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 835214478, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.13.17b229a32dfcfaf2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:03:10.151019 kubelet[1895]: E0209 10:03:10.150916 1895 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 9 10:03:10.237937 kubelet[1895]: I0209 10:03:10.237900 1895 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.13" Feb 9 10:03:10.238996 kubelet[1895]: E0209 10:03:10.238974 1895 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.13" Feb 9 10:03:10.239081 kubelet[1895]: E0209 10:03:10.239019 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcdc04", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554793476, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 10, 237863976, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.13.17b229a32dfcdc04" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 10:03:10.239882 kubelet[1895]: E0209 10:03:10.239830 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcf0f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554798835, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 10, 237869495, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.13.17b229a32dfcf0f3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:03:10.240587 kubelet[1895]: E0209 10:03:10.240535 1895 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.13.17b229a32dfcfaf2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.13", UID:"10.200.20.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.13"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 3, 9, 554801394, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 3, 10, 237872254, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.13.17b229a32dfcfaf2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:03:10.501853 kubelet[1895]: I0209 10:03:10.501467 1895 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 10:03:10.512909 kubelet[1895]: E0209 10:03:10.512885 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:10.900437 kubelet[1895]: E0209 10:03:10.900168 1895 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.20.13" not found Feb 9 10:03:10.955051 kubelet[1895]: E0209 10:03:10.955009 1895 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.20.13\" not found" node="10.200.20.13" Feb 9 10:03:11.039855 kubelet[1895]: I0209 10:03:11.039829 1895 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.13" Feb 9 10:03:11.044113 kubelet[1895]: I0209 10:03:11.044093 1895 kubelet_node_status.go:73] "Successfully registered node" node="10.200.20.13" Feb 9 10:03:11.068915 kubelet[1895]: I0209 10:03:11.068884 1895 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 10:03:11.069377 env[1372]: time="2024-02-09T10:03:11.069275508Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 10:03:11.069939 kubelet[1895]: I0209 10:03:11.069917 1895 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 10:03:11.303915 sudo[1680]: pam_unix(sudo:session): session closed for user root Feb 9 10:03:11.386745 sshd[1677]: pam_unix(sshd:session): session closed for user core Feb 9 10:03:11.389152 systemd[1]: sshd@4-10.200.20.13:22-10.200.12.6:33432.service: Deactivated successfully. Feb 9 10:03:11.389876 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 9 10:03:11.390414 systemd-logind[1364]: Session 7 logged out. Waiting for processes to exit. Feb 9 10:03:11.391124 systemd-logind[1364]: Removed session 7. Feb 9 10:03:11.513003 kubelet[1895]: E0209 10:03:11.512957 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:11.513231 kubelet[1895]: I0209 10:03:11.512964 1895 apiserver.go:52] "Watching apiserver" Feb 9 10:03:11.515532 kubelet[1895]: I0209 10:03:11.515503 1895 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:03:11.515622 kubelet[1895]: I0209 10:03:11.515595 1895 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:03:11.520592 systemd[1]: Created slice kubepods-besteffort-pod8e6d8933_ccab_416e_8055_b630109a8a8c.slice. Feb 9 10:03:11.528018 systemd[1]: Created slice kubepods-burstable-pod94612359_a2e2_4be9_916b_2a63495a81bd.slice. Feb 9 10:03:11.534653 kubelet[1895]: I0209 10:03:11.534621 1895 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 10:03:11.542803 kubelet[1895]: I0209 10:03:11.542771 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-xtables-lock\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543045 kubelet[1895]: I0209 10:03:11.543018 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94612359-a2e2-4be9-916b-2a63495a81bd-hubble-tls\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543045 kubelet[1895]: I0209 10:03:11.543058 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/8e6d8933-ccab-416e-8055-b630109a8a8c-xtables-lock\") pod \"kube-proxy-s8zt2\" (UID: \"8e6d8933-ccab-416e-8055-b630109a8a8c\") " pod="kube-system/kube-proxy-s8zt2" Feb 9 10:03:11.543148 kubelet[1895]: I0209 10:03:11.543090 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e6d8933-ccab-416e-8055-b630109a8a8c-lib-modules\") pod \"kube-proxy-s8zt2\" (UID: \"8e6d8933-ccab-416e-8055-b630109a8a8c\") " pod="kube-system/kube-proxy-s8zt2" Feb 9 10:03:11.543148 kubelet[1895]: I0209 10:03:11.543113 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz85l\" (UniqueName: \"kubernetes.io/projected/8e6d8933-ccab-416e-8055-b630109a8a8c-kube-api-access-zz85l\") pod \"kube-proxy-s8zt2\" (UID: \"8e6d8933-ccab-416e-8055-b630109a8a8c\") " pod="kube-system/kube-proxy-s8zt2" Feb 9 10:03:11.543148 kubelet[1895]: I0209 10:03:11.543134 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-run\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543218 kubelet[1895]: I0209 10:03:11.543151 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-bpf-maps\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543218 kubelet[1895]: I0209 10:03:11.543169 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-hostproc\") pod \"cilium-z95ml\" (UID: 
\"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543218 kubelet[1895]: I0209 10:03:11.543186 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-etc-cni-netd\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543218 kubelet[1895]: I0209 10:03:11.543206 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94612359-a2e2-4be9-916b-2a63495a81bd-clustermesh-secrets\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543314 kubelet[1895]: I0209 10:03:11.543225 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e6d8933-ccab-416e-8055-b630109a8a8c-kube-proxy\") pod \"kube-proxy-s8zt2\" (UID: \"8e6d8933-ccab-416e-8055-b630109a8a8c\") " pod="kube-system/kube-proxy-s8zt2" Feb 9 10:03:11.543314 kubelet[1895]: I0209 10:03:11.543245 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-lib-modules\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543314 kubelet[1895]: I0209 10:03:11.543274 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-config-path\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543426 kubelet[1895]: I0209 
10:03:11.543408 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v96t\" (UniqueName: \"kubernetes.io/projected/94612359-a2e2-4be9-916b-2a63495a81bd-kube-api-access-5v96t\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543461 kubelet[1895]: I0209 10:03:11.543445 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-cgroup\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543589 kubelet[1895]: I0209 10:03:11.543571 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cni-path\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543637 kubelet[1895]: I0209 10:03:11.543606 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-host-proc-sys-net\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543674 kubelet[1895]: I0209 10:03:11.543665 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-host-proc-sys-kernel\") pod \"cilium-z95ml\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") " pod="kube-system/cilium-z95ml" Feb 9 10:03:11.543705 kubelet[1895]: I0209 10:03:11.543680 1895 reconciler.go:41] "Reconciler: start to sync state" Feb 9 10:03:11.830187 env[1372]: 
time="2024-02-09T10:03:11.830082940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8zt2,Uid:8e6d8933-ccab-416e-8055-b630109a8a8c,Namespace:kube-system,Attempt:0,}" Feb 9 10:03:11.841398 env[1372]: time="2024-02-09T10:03:11.841021513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z95ml,Uid:94612359-a2e2-4be9-916b-2a63495a81bd,Namespace:kube-system,Attempt:0,}" Feb 9 10:03:12.513630 kubelet[1895]: E0209 10:03:12.513600 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:13.514563 kubelet[1895]: E0209 10:03:13.514528 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:14.515347 kubelet[1895]: E0209 10:03:14.515297 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:14.845393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1341893549.mount: Deactivated successfully. 
Feb 9 10:03:14.898390 env[1372]: time="2024-02-09T10:03:14.898342020Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:14.914020 env[1372]: time="2024-02-09T10:03:14.913978719Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:14.933130 env[1372]: time="2024-02-09T10:03:14.933087583Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:14.951061 env[1372]: time="2024-02-09T10:03:14.951024129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:14.960780 env[1372]: time="2024-02-09T10:03:14.960748326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:14.972843 env[1372]: time="2024-02-09T10:03:14.972810641Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:14.983708 env[1372]: time="2024-02-09T10:03:14.983551269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:15.001506 env[1372]: time="2024-02-09T10:03:15.001332086Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:15.084465 env[1372]: time="2024-02-09T10:03:15.084202388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:03:15.084465 env[1372]: time="2024-02-09T10:03:15.084239061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:03:15.084465 env[1372]: time="2024-02-09T10:03:15.084249379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:03:15.084465 env[1372]: time="2024-02-09T10:03:15.084395469Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82 pid=1939 runtime=io.containerd.runc.v2 Feb 9 10:03:15.101858 systemd[1]: Started cri-containerd-015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82.scope. Feb 9 10:03:15.129719 env[1372]: time="2024-02-09T10:03:15.129656372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z95ml,Uid:94612359-a2e2-4be9-916b-2a63495a81bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\"" Feb 9 10:03:15.132310 env[1372]: time="2024-02-09T10:03:15.132214418Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 10:03:15.133809 env[1372]: time="2024-02-09T10:03:15.133723634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:03:15.133888 env[1372]: time="2024-02-09T10:03:15.133820735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:03:15.133888 env[1372]: time="2024-02-09T10:03:15.133862086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:03:15.134080 env[1372]: time="2024-02-09T10:03:15.134043410Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a26474bb8e4f51842848615dbd9c5717c28481aa9d6f172e0b226c4b433a61cb pid=1978 runtime=io.containerd.runc.v2 Feb 9 10:03:15.144559 systemd[1]: Started cri-containerd-a26474bb8e4f51842848615dbd9c5717c28481aa9d6f172e0b226c4b433a61cb.scope. Feb 9 10:03:15.170567 env[1372]: time="2024-02-09T10:03:15.170477886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8zt2,Uid:8e6d8933-ccab-416e-8055-b630109a8a8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a26474bb8e4f51842848615dbd9c5717c28481aa9d6f172e0b226c4b433a61cb\"" Feb 9 10:03:15.516387 kubelet[1895]: E0209 10:03:15.516344 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:15.838435 systemd[1]: run-containerd-runc-k8s.io-015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82-runc.6RHOcX.mount: Deactivated successfully. 
Feb 9 10:03:16.517453 kubelet[1895]: E0209 10:03:16.517403 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:17.517831 kubelet[1895]: E0209 10:03:17.517784 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:18.518176 kubelet[1895]: E0209 10:03:18.518139 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:19.518550 kubelet[1895]: E0209 10:03:19.518453 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:20.027325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2606593467.mount: Deactivated successfully. Feb 9 10:03:20.519562 kubelet[1895]: E0209 10:03:20.519534 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:21.520188 kubelet[1895]: E0209 10:03:21.520123 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:22.520772 kubelet[1895]: E0209 10:03:22.520737 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:23.521744 kubelet[1895]: E0209 10:03:23.521713 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:23.907802 env[1372]: time="2024-02-09T10:03:23.907430409Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:23.923570 env[1372]: time="2024-02-09T10:03:23.923475699Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:23.934577 env[1372]: time="2024-02-09T10:03:23.934514182Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:23.935262 env[1372]: time="2024-02-09T10:03:23.935229943Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 10:03:23.935953 env[1372]: time="2024-02-09T10:03:23.935922427Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 10:03:23.937718 env[1372]: time="2024-02-09T10:03:23.937685214Z" level=info msg="CreateContainer within sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:03:23.989804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1100095683.mount: Deactivated successfully. Feb 9 10:03:23.994260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1917984685.mount: Deactivated successfully. 
Feb 9 10:03:24.042577 env[1372]: time="2024-02-09T10:03:24.042519684Z" level=info msg="CreateContainer within sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383\"" Feb 9 10:03:24.043447 env[1372]: time="2024-02-09T10:03:24.043421817Z" level=info msg="StartContainer for \"73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383\"" Feb 9 10:03:24.060163 systemd[1]: Started cri-containerd-73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383.scope. Feb 9 10:03:24.096965 systemd[1]: cri-containerd-73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383.scope: Deactivated successfully. Feb 9 10:03:24.098380 env[1372]: time="2024-02-09T10:03:24.098336523Z" level=info msg="StartContainer for \"73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383\" returns successfully" Feb 9 10:03:24.357410 env[1372]: time="2024-02-09T10:03:24.357352704Z" level=info msg="shim disconnected" id=73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383 Feb 9 10:03:24.357648 env[1372]: time="2024-02-09T10:03:24.357629539Z" level=warning msg="cleaning up after shim disconnected" id=73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383 namespace=k8s.io Feb 9 10:03:24.357708 env[1372]: time="2024-02-09T10:03:24.357695808Z" level=info msg="cleaning up dead shim" Feb 9 10:03:24.365322 env[1372]: time="2024-02-09T10:03:24.365287213Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:03:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2066 runtime=io.containerd.runc.v2\n" Feb 9 10:03:24.522560 kubelet[1895]: E0209 10:03:24.522524 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:24.604669 env[1372]: time="2024-02-09T10:03:24.604631234Z" level=info msg="CreateContainer 
within sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 10:03:24.685443 env[1372]: time="2024-02-09T10:03:24.685046992Z" level=info msg="CreateContainer within sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568\"" Feb 9 10:03:24.685987 env[1372]: time="2024-02-09T10:03:24.685963523Z" level=info msg="StartContainer for \"8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568\"" Feb 9 10:03:24.699967 systemd[1]: Started cri-containerd-8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568.scope. Feb 9 10:03:24.736998 env[1372]: time="2024-02-09T10:03:24.736819209Z" level=info msg="StartContainer for \"8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568\" returns successfully" Feb 9 10:03:24.742106 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 10:03:24.742635 systemd[1]: Stopped systemd-sysctl.service. Feb 9 10:03:24.742872 systemd[1]: Stopping systemd-sysctl.service... Feb 9 10:03:24.746759 systemd[1]: Starting systemd-sysctl.service... Feb 9 10:03:24.747012 systemd[1]: cri-containerd-8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568.scope: Deactivated successfully. Feb 9 10:03:24.752350 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 10:03:24.795241 env[1372]: time="2024-02-09T10:03:24.795198391Z" level=info msg="shim disconnected" id=8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568 Feb 9 10:03:24.795455 env[1372]: time="2024-02-09T10:03:24.795437632Z" level=warning msg="cleaning up after shim disconnected" id=8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568 namespace=k8s.io Feb 9 10:03:24.795548 env[1372]: time="2024-02-09T10:03:24.795532777Z" level=info msg="cleaning up dead shim" Feb 9 10:03:24.802886 env[1372]: time="2024-02-09T10:03:24.802850266Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:03:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2131 runtime=io.containerd.runc.v2\n" Feb 9 10:03:24.988291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383-rootfs.mount: Deactivated successfully. Feb 9 10:03:25.512208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581054169.mount: Deactivated successfully. Feb 9 10:03:25.523609 kubelet[1895]: E0209 10:03:25.523539 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:25.607453 env[1372]: time="2024-02-09T10:03:25.607377610Z" level=info msg="CreateContainer within sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 10:03:25.681092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2498626608.mount: Deactivated successfully. 
Feb 9 10:03:25.708172 env[1372]: time="2024-02-09T10:03:25.708130823Z" level=info msg="CreateContainer within sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a\"" Feb 9 10:03:25.708738 env[1372]: time="2024-02-09T10:03:25.708713690Z" level=info msg="StartContainer for \"843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a\"" Feb 9 10:03:25.723314 systemd[1]: Started cri-containerd-843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a.scope. Feb 9 10:03:25.761181 systemd[1]: cri-containerd-843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a.scope: Deactivated successfully. Feb 9 10:03:25.768751 env[1372]: time="2024-02-09T10:03:25.768658275Z" level=info msg="StartContainer for \"843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a\" returns successfully" Feb 9 10:03:26.017692 env[1372]: time="2024-02-09T10:03:26.017640926Z" level=info msg="shim disconnected" id=843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a Feb 9 10:03:26.017947 env[1372]: time="2024-02-09T10:03:26.017928762Z" level=warning msg="cleaning up after shim disconnected" id=843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a namespace=k8s.io Feb 9 10:03:26.018029 env[1372]: time="2024-02-09T10:03:26.018015788Z" level=info msg="cleaning up dead shim" Feb 9 10:03:26.038316 env[1372]: time="2024-02-09T10:03:26.038215806Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:03:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2192 runtime=io.containerd.runc.v2\n" Feb 9 10:03:26.226534 env[1372]: time="2024-02-09T10:03:26.226496396Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:26.252441 env[1372]: 
time="2024-02-09T10:03:26.252379930Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:26.263205 env[1372]: time="2024-02-09T10:03:26.263168131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:26.271870 env[1372]: time="2024-02-09T10:03:26.271835783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:26.272182 env[1372]: time="2024-02-09T10:03:26.272152134Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\"" Feb 9 10:03:26.274059 env[1372]: time="2024-02-09T10:03:26.274011805Z" level=info msg="CreateContainer within sandbox \"a26474bb8e4f51842848615dbd9c5717c28481aa9d6f172e0b226c4b433a61cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 10:03:26.328187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount564337758.mount: Deactivated successfully. Feb 9 10:03:26.333382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount514331125.mount: Deactivated successfully. 
Feb 9 10:03:26.365766 env[1372]: time="2024-02-09T10:03:26.365722698Z" level=info msg="CreateContainer within sandbox \"a26474bb8e4f51842848615dbd9c5717c28481aa9d6f172e0b226c4b433a61cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d9594e18903fcb64bac9212db1f8d5f2629f5b17c9a903eefacdceec1802de2\"" Feb 9 10:03:26.366765 env[1372]: time="2024-02-09T10:03:26.366740340Z" level=info msg="StartContainer for \"1d9594e18903fcb64bac9212db1f8d5f2629f5b17c9a903eefacdceec1802de2\"" Feb 9 10:03:26.384542 systemd[1]: Started cri-containerd-1d9594e18903fcb64bac9212db1f8d5f2629f5b17c9a903eefacdceec1802de2.scope. Feb 9 10:03:26.425193 env[1372]: time="2024-02-09T10:03:26.425121138Z" level=info msg="StartContainer for \"1d9594e18903fcb64bac9212db1f8d5f2629f5b17c9a903eefacdceec1802de2\" returns successfully" Feb 9 10:03:26.524457 kubelet[1895]: E0209 10:03:26.524428 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:26.613318 env[1372]: time="2024-02-09T10:03:26.612985913Z" level=info msg="CreateContainer within sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 10:03:26.633149 kubelet[1895]: I0209 10:03:26.633115 1895 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-s8zt2" podStartSLOduration=4.532343098 podCreationTimestamp="2024-02-09 10:03:11 +0000 UTC" firstStartedPulling="2024-02-09 10:03:15.171713038 +0000 UTC m=+6.377507394" lastFinishedPulling="2024-02-09 10:03:26.272444648 +0000 UTC m=+17.478239004" observedRunningTime="2024-02-09 10:03:26.616416219 +0000 UTC m=+17.822210575" watchObservedRunningTime="2024-02-09 10:03:26.633074708 +0000 UTC m=+17.838869064" Feb 9 10:03:26.688235 env[1372]: time="2024-02-09T10:03:26.688179456Z" level=info msg="CreateContainer within sandbox 
\"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40\"" Feb 9 10:03:26.689019 env[1372]: time="2024-02-09T10:03:26.688986370Z" level=info msg="StartContainer for \"f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40\"" Feb 9 10:03:26.703588 systemd[1]: Started cri-containerd-f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40.scope. Feb 9 10:03:26.733026 systemd[1]: cri-containerd-f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40.scope: Deactivated successfully. Feb 9 10:03:26.735353 env[1372]: time="2024-02-09T10:03:26.735053604Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94612359_a2e2_4be9_916b_2a63495a81bd.slice/cri-containerd-f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40.scope/memory.events\": no such file or directory" Feb 9 10:03:26.745992 env[1372]: time="2024-02-09T10:03:26.745951509Z" level=info msg="StartContainer for \"f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40\" returns successfully" Feb 9 10:03:26.864389 env[1372]: time="2024-02-09T10:03:26.863903480Z" level=info msg="shim disconnected" id=f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40 Feb 9 10:03:26.864389 env[1372]: time="2024-02-09T10:03:26.863947993Z" level=warning msg="cleaning up after shim disconnected" id=f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40 namespace=k8s.io Feb 9 10:03:26.864389 env[1372]: time="2024-02-09T10:03:26.863957551Z" level=info msg="cleaning up dead shim" Feb 9 10:03:26.870716 env[1372]: time="2024-02-09T10:03:26.870671987Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:03:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2360 
runtime=io.containerd.runc.v2\n" Feb 9 10:03:27.525060 kubelet[1895]: E0209 10:03:27.525026 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:27.617039 env[1372]: time="2024-02-09T10:03:27.616998539Z" level=info msg="CreateContainer within sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 10:03:27.674631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759111095.mount: Deactivated successfully. Feb 9 10:03:27.678643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294566976.mount: Deactivated successfully. Feb 9 10:03:27.707268 env[1372]: time="2024-02-09T10:03:27.707218490Z" level=info msg="CreateContainer within sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\"" Feb 9 10:03:27.708052 env[1372]: time="2024-02-09T10:03:27.708018129Z" level=info msg="StartContainer for \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\"" Feb 9 10:03:27.721516 systemd[1]: Started cri-containerd-09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da.scope. Feb 9 10:03:27.760221 env[1372]: time="2024-02-09T10:03:27.760176992Z" level=info msg="StartContainer for \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\" returns successfully" Feb 9 10:03:27.843801 kubelet[1895]: I0209 10:03:27.842443 1895 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 10:03:27.850536 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 10:03:28.182553 kernel: Initializing XFRM netlink socket Feb 9 10:03:28.191517 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 10:03:28.525874 kubelet[1895]: E0209 10:03:28.525838 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:29.428515 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 10:03:29.428933 systemd-networkd[1525]: cilium_host: Link UP Feb 9 10:03:29.429044 systemd-networkd[1525]: cilium_net: Link UP Feb 9 10:03:29.429047 systemd-networkd[1525]: cilium_net: Gained carrier Feb 9 10:03:29.429155 systemd-networkd[1525]: cilium_host: Gained carrier Feb 9 10:03:29.429318 systemd-networkd[1525]: cilium_host: Gained IPv6LL Feb 9 10:03:29.512240 kubelet[1895]: E0209 10:03:29.512205 1895 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:29.526385 kubelet[1895]: E0209 10:03:29.526348 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:29.593818 systemd-networkd[1525]: cilium_vxlan: Link UP Feb 9 10:03:29.593824 systemd-networkd[1525]: cilium_vxlan: Gained carrier Feb 9 10:03:29.858511 kernel: NET: Registered PF_ALG protocol family Feb 9 10:03:30.434647 systemd-networkd[1525]: cilium_net: Gained IPv6LL Feb 9 10:03:30.528449 kubelet[1895]: E0209 10:03:30.528342 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:30.542521 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 10:03:30.547150 systemd-networkd[1525]: lxc_health: Link UP Feb 9 10:03:30.547347 systemd-networkd[1525]: lxc_health: Gained carrier Feb 9 10:03:30.626687 systemd-networkd[1525]: cilium_vxlan: Gained IPv6LL Feb 9 10:03:31.117988 kubelet[1895]: I0209 10:03:31.117959 1895 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-z95ml" podStartSLOduration=11.313926602 podCreationTimestamp="2024-02-09 10:03:11 +0000 UTC" 
firstStartedPulling="2024-02-09 10:03:15.131695842 +0000 UTC m=+6.337490198" lastFinishedPulling="2024-02-09 10:03:23.935687587 +0000 UTC m=+15.141482023" observedRunningTime="2024-02-09 10:03:28.633572972 +0000 UTC m=+19.839367328" watchObservedRunningTime="2024-02-09 10:03:31.117918427 +0000 UTC m=+22.323712783" Feb 9 10:03:31.118435 kubelet[1895]: I0209 10:03:31.118418 1895 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:03:31.122909 systemd[1]: Created slice kubepods-besteffort-pod8d94aa24_853f_46c5_afa3_ae0428df6634.slice. Feb 9 10:03:31.151182 kubelet[1895]: I0209 10:03:31.151131 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9p5z\" (UniqueName: \"kubernetes.io/projected/8d94aa24-853f-46c5-afa3-ae0428df6634-kube-api-access-d9p5z\") pod \"nginx-deployment-845c78c8b9-gc8lt\" (UID: \"8d94aa24-853f-46c5-afa3-ae0428df6634\") " pod="default/nginx-deployment-845c78c8b9-gc8lt" Feb 9 10:03:31.427013 env[1372]: time="2024-02-09T10:03:31.426913809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-gc8lt,Uid:8d94aa24-853f-46c5-afa3-ae0428df6634,Namespace:default,Attempt:0,}" Feb 9 10:03:31.528965 kubelet[1895]: E0209 10:03:31.528925 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:31.539621 systemd-networkd[1525]: lxc59d77da5efbb: Link UP Feb 9 10:03:31.549974 kernel: eth0: renamed from tmpafd08 Feb 9 10:03:31.562858 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:03:31.562976 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc59d77da5efbb: link becomes ready Feb 9 10:03:31.563151 systemd-networkd[1525]: lxc59d77da5efbb: Gained carrier Feb 9 10:03:32.034644 systemd-networkd[1525]: lxc_health: Gained IPv6LL Feb 9 10:03:32.529925 kubelet[1895]: E0209 10:03:32.529894 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 10:03:33.058671 systemd-networkd[1525]: lxc59d77da5efbb: Gained IPv6LL Feb 9 10:03:33.530719 kubelet[1895]: E0209 10:03:33.530689 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:34.531328 kubelet[1895]: E0209 10:03:34.531272 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:35.026816 env[1372]: time="2024-02-09T10:03:35.026752203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:03:35.027175 env[1372]: time="2024-02-09T10:03:35.027149832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:03:35.027282 env[1372]: time="2024-02-09T10:03:35.027260058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:03:35.027577 env[1372]: time="2024-02-09T10:03:35.027527104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afd08fa02faa47f2f26d114a3cdd83660581f16d2fbb3e3d6033788cf34965e3 pid=2925 runtime=io.containerd.runc.v2 Feb 9 10:03:35.044825 systemd[1]: Started cri-containerd-afd08fa02faa47f2f26d114a3cdd83660581f16d2fbb3e3d6033788cf34965e3.scope. 
Feb 9 10:03:35.079607 env[1372]: time="2024-02-09T10:03:35.079566974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-gc8lt,Uid:8d94aa24-853f-46c5-afa3-ae0428df6634,Namespace:default,Attempt:0,} returns sandbox id \"afd08fa02faa47f2f26d114a3cdd83660581f16d2fbb3e3d6033788cf34965e3\"" Feb 9 10:03:35.081265 env[1372]: time="2024-02-09T10:03:35.081241239Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 10:03:35.532412 kubelet[1895]: E0209 10:03:35.532370 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:36.533116 kubelet[1895]: E0209 10:03:36.533069 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:36.946788 kubelet[1895]: I0209 10:03:36.946433 1895 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 10:03:37.534007 kubelet[1895]: E0209 10:03:37.533975 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:37.995583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912399507.mount: Deactivated successfully. 
Feb 9 10:03:38.535034 kubelet[1895]: E0209 10:03:38.535001 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:39.160175 env[1372]: time="2024-02-09T10:03:39.160116475Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:39.176052 env[1372]: time="2024-02-09T10:03:39.176014387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:39.186995 env[1372]: time="2024-02-09T10:03:39.186953407Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:39.197175 env[1372]: time="2024-02-09T10:03:39.197129038Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:03:39.198088 env[1372]: time="2024-02-09T10:03:39.198055568Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 10:03:39.199650 env[1372]: time="2024-02-09T10:03:39.199614343Z" level=info msg="CreateContainer within sandbox \"afd08fa02faa47f2f26d114a3cdd83660581f16d2fbb3e3d6033788cf34965e3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 10:03:39.242138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737415159.mount: Deactivated successfully. Feb 9 10:03:39.246292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1432756455.mount: Deactivated successfully. 
Feb 9 10:03:39.273325 env[1372]: time="2024-02-09T10:03:39.273245955Z" level=info msg="CreateContainer within sandbox \"afd08fa02faa47f2f26d114a3cdd83660581f16d2fbb3e3d6033788cf34965e3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"fb4e313e72b3bf6a4ba5951837ad08cedc4fc137b0ea88d2f87e9ceced1b4957\"" Feb 9 10:03:39.274303 env[1372]: time="2024-02-09T10:03:39.274278793Z" level=info msg="StartContainer for \"fb4e313e72b3bf6a4ba5951837ad08cedc4fc137b0ea88d2f87e9ceced1b4957\"" Feb 9 10:03:39.290902 systemd[1]: Started cri-containerd-fb4e313e72b3bf6a4ba5951837ad08cedc4fc137b0ea88d2f87e9ceced1b4957.scope. Feb 9 10:03:39.327837 env[1372]: time="2024-02-09T10:03:39.327790795Z" level=info msg="StartContainer for \"fb4e313e72b3bf6a4ba5951837ad08cedc4fc137b0ea88d2f87e9ceced1b4957\" returns successfully" Feb 9 10:03:39.535139 kubelet[1895]: E0209 10:03:39.535103 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:39.677927 kubelet[1895]: I0209 10:03:39.677896 1895 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-gc8lt" podStartSLOduration=4.560413277 podCreationTimestamp="2024-02-09 10:03:31 +0000 UTC" firstStartedPulling="2024-02-09 10:03:35.08084593 +0000 UTC m=+26.286640286" lastFinishedPulling="2024-02-09 10:03:39.198297099 +0000 UTC m=+30.404091415" observedRunningTime="2024-02-09 10:03:39.677688467 +0000 UTC m=+30.883482823" watchObservedRunningTime="2024-02-09 10:03:39.677864406 +0000 UTC m=+30.883658762" Feb 9 10:03:40.536179 kubelet[1895]: E0209 10:03:40.536145 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:41.537008 kubelet[1895]: E0209 10:03:41.536975 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:42.537338 kubelet[1895]: E0209 
10:03:42.537301 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:43.537675 kubelet[1895]: E0209 10:03:43.537637 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:44.537897 kubelet[1895]: E0209 10:03:44.537865 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:45.538601 kubelet[1895]: E0209 10:03:45.538564 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:45.997962 kubelet[1895]: I0209 10:03:45.997676 1895 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:03:46.002585 systemd[1]: Created slice kubepods-besteffort-pod0c6735d4_7702_455d_933b_600fd1da9ab8.slice. Feb 9 10:03:46.027374 kubelet[1895]: I0209 10:03:46.027338 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdgrb\" (UniqueName: \"kubernetes.io/projected/0c6735d4-7702-455d-933b-600fd1da9ab8-kube-api-access-sdgrb\") pod \"nfs-server-provisioner-0\" (UID: \"0c6735d4-7702-455d-933b-600fd1da9ab8\") " pod="default/nfs-server-provisioner-0" Feb 9 10:03:46.027578 kubelet[1895]: I0209 10:03:46.027395 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0c6735d4-7702-455d-933b-600fd1da9ab8-data\") pod \"nfs-server-provisioner-0\" (UID: \"0c6735d4-7702-455d-933b-600fd1da9ab8\") " pod="default/nfs-server-provisioner-0" Feb 9 10:03:46.305889 env[1372]: time="2024-02-09T10:03:46.305842831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0c6735d4-7702-455d-933b-600fd1da9ab8,Namespace:default,Attempt:0,}" Feb 9 10:03:46.398936 systemd-networkd[1525]: lxcdf270918b6a3: Link UP Feb 9 
10:03:46.410121 kernel: eth0: renamed from tmpa169a Feb 9 10:03:46.422598 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:03:46.423073 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdf270918b6a3: link becomes ready Feb 9 10:03:46.422756 systemd-networkd[1525]: lxcdf270918b6a3: Gained carrier Feb 9 10:03:46.538838 kubelet[1895]: E0209 10:03:46.538770 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:46.663149 env[1372]: time="2024-02-09T10:03:46.662709631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:03:46.663317 env[1372]: time="2024-02-09T10:03:46.663291091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:03:46.663400 env[1372]: time="2024-02-09T10:03:46.663380561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:03:46.663723 env[1372]: time="2024-02-09T10:03:46.663678210Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a169a243487acce972af60273c0cd1ebc825e0e4dfdc79ad249094c871d980bd pid=3046 runtime=io.containerd.runc.v2 Feb 9 10:03:46.681476 systemd[1]: Started cri-containerd-a169a243487acce972af60273c0cd1ebc825e0e4dfdc79ad249094c871d980bd.scope. 
Feb 9 10:03:46.713130 env[1372]: time="2024-02-09T10:03:46.713082209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0c6735d4-7702-455d-933b-600fd1da9ab8,Namespace:default,Attempt:0,} returns sandbox id \"a169a243487acce972af60273c0cd1ebc825e0e4dfdc79ad249094c871d980bd\"" Feb 9 10:03:46.715230 env[1372]: time="2024-02-09T10:03:46.715204188Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 10:03:47.140365 systemd[1]: run-containerd-runc-k8s.io-a169a243487acce972af60273c0cd1ebc825e0e4dfdc79ad249094c871d980bd-runc.HuJ8Wy.mount: Deactivated successfully. Feb 9 10:03:47.539646 kubelet[1895]: E0209 10:03:47.539591 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:47.906721 systemd-networkd[1525]: lxcdf270918b6a3: Gained IPv6LL Feb 9 10:03:48.539916 kubelet[1895]: E0209 10:03:48.539873 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:49.512280 kubelet[1895]: E0209 10:03:49.512243 1895 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:49.540539 kubelet[1895]: E0209 10:03:49.540513 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:03:49.746731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795149905.mount: Deactivated successfully. 
Feb 9 10:03:50.541344 kubelet[1895]: E0209 10:03:50.541286 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:03:51.542000 kubelet[1895]: E0209 10:03:51.541963 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:03:51.891791 env[1372]: time="2024-02-09T10:03:51.891682685Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:03:51.909800 env[1372]: time="2024-02-09T10:03:51.909751950Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:03:51.925186 env[1372]: time="2024-02-09T10:03:51.924800506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:03:51.935175 env[1372]: time="2024-02-09T10:03:51.935141593Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:03:51.936471 env[1372]: time="2024-02-09T10:03:51.935788491Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 9 10:03:51.938845 env[1372]: time="2024-02-09T10:03:51.938818640Z" level=info msg="CreateContainer within sandbox \"a169a243487acce972af60273c0cd1ebc825e0e4dfdc79ad249094c871d980bd\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 10:03:51.980013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2075680711.mount: Deactivated successfully.
Feb 9 10:03:51.985110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794375360.mount: Deactivated successfully.
Feb 9 10:03:52.022380 env[1372]: time="2024-02-09T10:03:52.022317335Z" level=info msg="CreateContainer within sandbox \"a169a243487acce972af60273c0cd1ebc825e0e4dfdc79ad249094c871d980bd\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"7e4f251136bd230acf5de619dd0efec4321e6867389ab9faad3fdf18f10f1182\""
Feb 9 10:03:52.022984 env[1372]: time="2024-02-09T10:03:52.022869763Z" level=info msg="StartContainer for \"7e4f251136bd230acf5de619dd0efec4321e6867389ab9faad3fdf18f10f1182\""
Feb 9 10:03:52.039493 systemd[1]: Started cri-containerd-7e4f251136bd230acf5de619dd0efec4321e6867389ab9faad3fdf18f10f1182.scope.
Feb 9 10:03:52.072225 env[1372]: time="2024-02-09T10:03:52.072179704Z" level=info msg="StartContainer for \"7e4f251136bd230acf5de619dd0efec4321e6867389ab9faad3fdf18f10f1182\" returns successfully"
Feb 9 10:03:52.542803 kubelet[1895]: E0209 10:03:52.542773 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:03:52.705971 kubelet[1895]: I0209 10:03:52.705933 1895 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.48382496 podCreationTimestamp="2024-02-09 10:03:45 +0000 UTC" firstStartedPulling="2024-02-09 10:03:46.714682802 +0000 UTC m=+37.920477158" lastFinishedPulling="2024-02-09 10:03:51.936758157 +0000 UTC m=+43.142552513" observedRunningTime="2024-02-09 10:03:52.70552995 +0000 UTC m=+43.911324306" watchObservedRunningTime="2024-02-09 10:03:52.705900315 +0000 UTC m=+43.911694671"
Feb 9 10:03:53.543456 kubelet[1895]: E0209 10:03:53.543425 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:03:54.544115 kubelet[1895]: E0209 10:03:54.544084 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:03:55.545258 kubelet[1895]: E0209 10:03:55.545226 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:03:56.546135 kubelet[1895]: E0209 10:03:56.546106 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:03:57.546628 kubelet[1895]: E0209 10:03:57.546563 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:03:58.547118 kubelet[1895]: E0209 10:03:58.547079 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:03:59.547326 kubelet[1895]: E0209 10:03:59.547299 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:04:00.548104 kubelet[1895]: E0209 10:04:00.548063 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:04:01.549201 kubelet[1895]: E0209 10:04:01.549171 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:04:02.160778 kubelet[1895]: I0209 10:04:02.160747 1895 topology_manager.go:212] "Topology Admit Handler"
Feb 9 10:04:02.165063 systemd[1]: Created slice kubepods-besteffort-pod1dde41df_476d_4a2e_bf8e_29bb1009a312.slice.
Feb 9 10:04:02.313291 kubelet[1895]: I0209 10:04:02.313259 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c916cc66-c2dd-448f-8459-95206542dfd6\" (UniqueName: \"kubernetes.io/nfs/1dde41df-476d-4a2e-bf8e-29bb1009a312-pvc-c916cc66-c2dd-448f-8459-95206542dfd6\") pod \"test-pod-1\" (UID: \"1dde41df-476d-4a2e-bf8e-29bb1009a312\") " pod="default/test-pod-1"
Feb 9 10:04:02.313447 kubelet[1895]: I0209 10:04:02.313314 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f942\" (UniqueName: \"kubernetes.io/projected/1dde41df-476d-4a2e-bf8e-29bb1009a312-kube-api-access-9f942\") pod \"test-pod-1\" (UID: \"1dde41df-476d-4a2e-bf8e-29bb1009a312\") " pod="default/test-pod-1"
Feb 9 10:04:02.550076 kubelet[1895]: E0209 10:04:02.550046 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:04:02.655517 kernel: FS-Cache: Loaded
Feb 9 10:04:02.780602 kernel: RPC: Registered named UNIX socket transport module.
Feb 9 10:04:02.780716 kernel: RPC: Registered udp transport module.
Feb 9 10:04:02.784192 kernel: RPC: Registered tcp transport module.
Feb 9 10:04:02.788936 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 9 10:04:02.935511 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 9 10:04:03.116635 kernel: NFS: Registering the id_resolver key type
Feb 9 10:04:03.116763 kernel: Key type id_resolver registered
Feb 9 10:04:03.119687 kernel: Key type id_legacy registered
Feb 9 10:04:03.486682 nfsidmap[3167]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-c671677e8d'
Feb 9 10:04:03.550707 kubelet[1895]: E0209 10:04:03.550672 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:04:03.589072 nfsidmap[3168]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-c671677e8d'
Feb 9 10:04:03.668637 env[1372]: time="2024-02-09T10:04:03.668595823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1dde41df-476d-4a2e-bf8e-29bb1009a312,Namespace:default,Attempt:0,}"
Feb 9 10:04:03.763243 systemd-networkd[1525]: lxca7c8abf361aa: Link UP
Feb 9 10:04:03.773783 kernel: eth0: renamed from tmp9c5d7
Feb 9 10:04:03.791892 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 10:04:03.792016 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca7c8abf361aa: link becomes ready
Feb 9 10:04:03.792206 systemd-networkd[1525]: lxca7c8abf361aa: Gained carrier
Feb 9 10:04:03.979634 env[1372]: time="2024-02-09T10:04:03.979562930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:04:03.979773 env[1372]: time="2024-02-09T10:04:03.979645203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:04:03.979773 env[1372]: time="2024-02-09T10:04:03.979671281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:04:03.979842 env[1372]: time="2024-02-09T10:04:03.979807870Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c5d79bd6a575582995a9f4bc8b654afc2ffa32ae7a37d82b9b42c48ab06ff08 pid=3197 runtime=io.containerd.runc.v2
Feb 9 10:04:03.990794 systemd[1]: Started cri-containerd-9c5d79bd6a575582995a9f4bc8b654afc2ffa32ae7a37d82b9b42c48ab06ff08.scope.
Feb 9 10:04:04.026387 env[1372]: time="2024-02-09T10:04:04.025584933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1dde41df-476d-4a2e-bf8e-29bb1009a312,Namespace:default,Attempt:0,} returns sandbox id \"9c5d79bd6a575582995a9f4bc8b654afc2ffa32ae7a37d82b9b42c48ab06ff08\""
Feb 9 10:04:04.027422 env[1372]: time="2024-02-09T10:04:04.027272319Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 10:04:04.477231 env[1372]: time="2024-02-09T10:04:04.476868946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:04:04.496580 env[1372]: time="2024-02-09T10:04:04.496543024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:04:04.507309 env[1372]: time="2024-02-09T10:04:04.507266733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:04:04.516116 env[1372]: time="2024-02-09T10:04:04.516075114Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:04:04.516938 env[1372]: time="2024-02-09T10:04:04.516913047Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 9 10:04:04.519271 env[1372]: time="2024-02-09T10:04:04.519238182Z" level=info msg="CreateContainer within sandbox \"9c5d79bd6a575582995a9f4bc8b654afc2ffa32ae7a37d82b9b42c48ab06ff08\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 9 10:04:04.550951 kubelet[1895]: E0209 10:04:04.550910 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:04:04.596612 env[1372]: time="2024-02-09T10:04:04.596121839Z" level=info msg="CreateContainer within sandbox \"9c5d79bd6a575582995a9f4bc8b654afc2ffa32ae7a37d82b9b42c48ab06ff08\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ecdf7153588bc25f888f61ea8a7f009c39ffd0b6dc956fbb7cbe5b05e33115ba\""
Feb 9 10:04:04.596905 env[1372]: time="2024-02-09T10:04:04.596803385Z" level=info msg="StartContainer for \"ecdf7153588bc25f888f61ea8a7f009c39ffd0b6dc956fbb7cbe5b05e33115ba\""
Feb 9 10:04:04.612707 systemd[1]: Started cri-containerd-ecdf7153588bc25f888f61ea8a7f009c39ffd0b6dc956fbb7cbe5b05e33115ba.scope.
Feb 9 10:04:04.649674 env[1372]: time="2024-02-09T10:04:04.649619792Z" level=info msg="StartContainer for \"ecdf7153588bc25f888f61ea8a7f009c39ffd0b6dc956fbb7cbe5b05e33115ba\" returns successfully"
Feb 9 10:04:04.724318 kubelet[1895]: I0209 10:04:04.724284 1895 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.23359498 podCreationTimestamp="2024-02-09 10:03:46 +0000 UTC" firstStartedPulling="2024-02-09 10:04:04.027058416 +0000 UTC m=+55.232852772" lastFinishedPulling="2024-02-09 10:04:04.517700944 +0000 UTC m=+55.723495300" observedRunningTime="2024-02-09 10:04:04.723751026 +0000 UTC m=+55.929545382" watchObservedRunningTime="2024-02-09 10:04:04.724237508 +0000 UTC m=+55.930031824"
Feb 9 10:04:05.250622 systemd-networkd[1525]: lxca7c8abf361aa: Gained IPv6LL
Feb 9 10:04:05.551270 kubelet[1895]: E0209 10:04:05.551233 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:04:06.551593 kubelet[1895]: E0209 10:04:06.551553 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:04:07.552401 kubelet[1895]: E0209 10:04:07.552367 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:04:07.693676 systemd[1]: run-containerd-runc-k8s.io-09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da-runc.qfDDYO.mount: Deactivated successfully.
Feb 9 10:04:07.706350 env[1372]: time="2024-02-09T10:04:07.706292547Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 10:04:07.712123 env[1372]: time="2024-02-09T10:04:07.712091944Z" level=info msg="StopContainer for \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\" with timeout 1 (s)"
Feb 9 10:04:07.712604 env[1372]: time="2024-02-09T10:04:07.712582746Z" level=info msg="Stop container \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\" with signal terminated"
Feb 9 10:04:07.718316 systemd-networkd[1525]: lxc_health: Link DOWN
Feb 9 10:04:07.718324 systemd-networkd[1525]: lxc_health: Lost carrier
Feb 9 10:04:07.743987 systemd[1]: cri-containerd-09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da.scope: Deactivated successfully.
Feb 9 10:04:07.744293 systemd[1]: cri-containerd-09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da.scope: Consumed 6.152s CPU time.
Feb 9 10:04:07.760032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da-rootfs.mount: Deactivated successfully.
Feb 9 10:04:08.355853 env[1372]: time="2024-02-09T10:04:08.355803110Z" level=info msg="shim disconnected" id=09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da
Feb 9 10:04:08.355853 env[1372]: time="2024-02-09T10:04:08.355850306Z" level=warning msg="cleaning up after shim disconnected" id=09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da namespace=k8s.io
Feb 9 10:04:08.355853 env[1372]: time="2024-02-09T10:04:08.355862705Z" level=info msg="cleaning up dead shim"
Feb 9 10:04:08.363228 env[1372]: time="2024-02-09T10:04:08.363189312Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3327 runtime=io.containerd.runc.v2\n"
Feb 9 10:04:08.376117 env[1372]: time="2024-02-09T10:04:08.376059300Z" level=info msg="StopContainer for \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\" returns successfully"
Feb 9 10:04:08.376864 env[1372]: time="2024-02-09T10:04:08.376840801Z" level=info msg="StopPodSandbox for \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\""
Feb 9 10:04:08.377013 env[1372]: time="2024-02-09T10:04:08.376991950Z" level=info msg="Container to stop \"8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:08.377084 env[1372]: time="2024-02-09T10:04:08.377067184Z" level=info msg="Container to stop \"73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:08.377145 env[1372]: time="2024-02-09T10:04:08.377128900Z" level=info msg="Container to stop \"843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:08.377207 env[1372]: time="2024-02-09T10:04:08.377191295Z" level=info msg="Container to stop \"f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:08.377269 env[1372]: time="2024-02-09T10:04:08.377252970Z" level=info msg="Container to stop \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:08.378849 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82-shm.mount: Deactivated successfully.
Feb 9 10:04:08.384307 systemd[1]: cri-containerd-015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82.scope: Deactivated successfully.
Feb 9 10:04:08.423715 env[1372]: time="2024-02-09T10:04:08.423663426Z" level=info msg="shim disconnected" id=015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82
Feb 9 10:04:08.423715 env[1372]: time="2024-02-09T10:04:08.423713062Z" level=warning msg="cleaning up after shim disconnected" id=015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82 namespace=k8s.io
Feb 9 10:04:08.423938 env[1372]: time="2024-02-09T10:04:08.423722581Z" level=info msg="cleaning up dead shim"
Feb 9 10:04:08.431143 env[1372]: time="2024-02-09T10:04:08.431103624Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3357 runtime=io.containerd.runc.v2\n"
Feb 9 10:04:08.431411 env[1372]: time="2024-02-09T10:04:08.431383923Z" level=info msg="TearDown network for sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" successfully"
Feb 9 10:04:08.431457 env[1372]: time="2024-02-09T10:04:08.431410641Z" level=info msg="StopPodSandbox for \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" returns successfully"
Feb 9 10:04:08.545316 kubelet[1895]: I0209 10:04:08.543627 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94612359-a2e2-4be9-916b-2a63495a81bd-clustermesh-secrets\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545316 kubelet[1895]: I0209 10:04:08.543668 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-run\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545316 kubelet[1895]: I0209 10:04:08.543687 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-hostproc\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545316 kubelet[1895]: I0209 10:04:08.543707 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-cgroup\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545316 kubelet[1895]: I0209 10:04:08.543729 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-xtables-lock\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545316 kubelet[1895]: I0209 10:04:08.543748 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-bpf-maps\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545636 kubelet[1895]: I0209 10:04:08.543764 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-etc-cni-netd\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545636 kubelet[1895]: I0209 10:04:08.543788 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-config-path\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545636 kubelet[1895]: I0209 10:04:08.543809 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v96t\" (UniqueName: \"kubernetes.io/projected/94612359-a2e2-4be9-916b-2a63495a81bd-kube-api-access-5v96t\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545636 kubelet[1895]: I0209 10:04:08.543825 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cni-path\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545636 kubelet[1895]: I0209 10:04:08.543842 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-host-proc-sys-net\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545636 kubelet[1895]: I0209 10:04:08.543864 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94612359-a2e2-4be9-916b-2a63495a81bd-hubble-tls\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545776 kubelet[1895]: I0209 10:04:08.543881 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-host-proc-sys-kernel\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545776 kubelet[1895]: I0209 10:04:08.543901 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-lib-modules\") pod \"94612359-a2e2-4be9-916b-2a63495a81bd\" (UID: \"94612359-a2e2-4be9-916b-2a63495a81bd\") "
Feb 9 10:04:08.545776 kubelet[1895]: I0209 10:04:08.543946 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.545776 kubelet[1895]: I0209 10:04:08.543977 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.545776 kubelet[1895]: I0209 10:04:08.543994 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-hostproc" (OuterVolumeSpecName: "hostproc") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.545891 kubelet[1895]: I0209 10:04:08.544011 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.545891 kubelet[1895]: I0209 10:04:08.544025 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.545891 kubelet[1895]: I0209 10:04:08.544038 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.545891 kubelet[1895]: I0209 10:04:08.544052 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.545891 kubelet[1895]: W0209 10:04:08.544180 1895 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/94612359-a2e2-4be9-916b-2a63495a81bd/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 10:04:08.546007 kubelet[1895]: I0209 10:04:08.544383 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.546007 kubelet[1895]: I0209 10:04:08.545809 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 10:04:08.546058 kubelet[1895]: I0209 10:04:08.546008 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.546058 kubelet[1895]: I0209 10:04:08.546031 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cni-path" (OuterVolumeSpecName: "cni-path") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.548159 kubelet[1895]: I0209 10:04:08.548131 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94612359-a2e2-4be9-916b-2a63495a81bd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 10:04:08.548473 kubelet[1895]: I0209 10:04:08.548453 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94612359-a2e2-4be9-916b-2a63495a81bd-kube-api-access-5v96t" (OuterVolumeSpecName: "kube-api-access-5v96t") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "kube-api-access-5v96t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:04:08.550099 kubelet[1895]: I0209 10:04:08.550069 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94612359-a2e2-4be9-916b-2a63495a81bd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "94612359-a2e2-4be9-916b-2a63495a81bd" (UID: "94612359-a2e2-4be9-916b-2a63495a81bd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:04:08.553157 kubelet[1895]: E0209 10:04:08.553140 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:04:08.644810 kubelet[1895]: I0209 10:04:08.644711 1895 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-config-path\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.644810 kubelet[1895]: I0209 10:04:08.644753 1895 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5v96t\" (UniqueName: \"kubernetes.io/projected/94612359-a2e2-4be9-916b-2a63495a81bd-kube-api-access-5v96t\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.644810 kubelet[1895]: I0209 10:04:08.644774 1895 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cni-path\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.645513 kubelet[1895]: I0209 10:04:08.644793 1895 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-host-proc-sys-net\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.645616 kubelet[1895]: I0209 10:04:08.645601 1895 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-xtables-lock\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.645682 kubelet[1895]: I0209 10:04:08.645674 1895 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-bpf-maps\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.645743 kubelet[1895]: I0209 10:04:08.645734 1895 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-etc-cni-netd\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.645802 kubelet[1895]: I0209 10:04:08.645793 1895 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94612359-a2e2-4be9-916b-2a63495a81bd-hubble-tls\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.645862 kubelet[1895]: I0209 10:04:08.645852 1895 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-host-proc-sys-kernel\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.645925 kubelet[1895]: I0209 10:04:08.645915 1895 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-lib-modules\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.645987 kubelet[1895]: I0209 10:04:08.645978 1895 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94612359-a2e2-4be9-916b-2a63495a81bd-clustermesh-secrets\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.646047 kubelet[1895]: I0209 10:04:08.646038 1895 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-run\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.646100 kubelet[1895]: I0209 10:04:08.646092 1895 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-hostproc\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.646156 kubelet[1895]: I0209 10:04:08.646148 1895 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94612359-a2e2-4be9-916b-2a63495a81bd-cilium-cgroup\") on node \"10.200.20.13\" DevicePath \"\""
Feb 9 10:04:08.688608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82-rootfs.mount: Deactivated successfully.
Feb 9 10:04:08.688715 systemd[1]: var-lib-kubelet-pods-94612359\x2da2e2\x2d4be9\x2d916b\x2d2a63495a81bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5v96t.mount: Deactivated successfully.
Feb 9 10:04:08.688778 systemd[1]: var-lib-kubelet-pods-94612359\x2da2e2\x2d4be9\x2d916b\x2d2a63495a81bd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 10:04:08.688834 systemd[1]: var-lib-kubelet-pods-94612359\x2da2e2\x2d4be9\x2d916b\x2d2a63495a81bd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 10:04:08.726298 kubelet[1895]: I0209 10:04:08.726267 1895 scope.go:115] "RemoveContainer" containerID="09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da"
Feb 9 10:04:08.729053 env[1372]: time="2024-02-09T10:04:08.729019528Z" level=info msg="RemoveContainer for \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\""
Feb 9 10:04:08.733034 systemd[1]: Removed slice kubepods-burstable-pod94612359_a2e2_4be9_916b_2a63495a81bd.slice.
Feb 9 10:04:08.733119 systemd[1]: kubepods-burstable-pod94612359_a2e2_4be9_916b_2a63495a81bd.slice: Consumed 6.239s CPU time.
Feb 9 10:04:08.744004 env[1372]: time="2024-02-09T10:04:08.743878646Z" level=info msg="RemoveContainer for \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\" returns successfully" Feb 9 10:04:08.744417 kubelet[1895]: I0209 10:04:08.744393 1895 scope.go:115] "RemoveContainer" containerID="f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40" Feb 9 10:04:08.745465 env[1372]: time="2024-02-09T10:04:08.745436889Z" level=info msg="RemoveContainer for \"f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40\"" Feb 9 10:04:08.760654 env[1372]: time="2024-02-09T10:04:08.760620942Z" level=info msg="RemoveContainer for \"f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40\" returns successfully" Feb 9 10:04:08.760819 kubelet[1895]: I0209 10:04:08.760794 1895 scope.go:115] "RemoveContainer" containerID="843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a" Feb 9 10:04:08.761925 env[1372]: time="2024-02-09T10:04:08.761663944Z" level=info msg="RemoveContainer for \"843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a\"" Feb 9 10:04:08.778779 env[1372]: time="2024-02-09T10:04:08.778745814Z" level=info msg="RemoveContainer for \"843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a\" returns successfully" Feb 9 10:04:08.779120 kubelet[1895]: I0209 10:04:08.779096 1895 scope.go:115] "RemoveContainer" containerID="8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568" Feb 9 10:04:08.780296 env[1372]: time="2024-02-09T10:04:08.780269899Z" level=info msg="RemoveContainer for \"8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568\"" Feb 9 10:04:08.796044 env[1372]: time="2024-02-09T10:04:08.796009310Z" level=info msg="RemoveContainer for \"8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568\" returns successfully" Feb 9 10:04:08.796317 kubelet[1895]: I0209 10:04:08.796289 1895 scope.go:115] "RemoveContainer" 
containerID="73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383" Feb 9 10:04:08.797343 env[1372]: time="2024-02-09T10:04:08.797313012Z" level=info msg="RemoveContainer for \"73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383\"" Feb 9 10:04:08.810427 env[1372]: time="2024-02-09T10:04:08.810390184Z" level=info msg="RemoveContainer for \"73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383\" returns successfully" Feb 9 10:04:08.810654 kubelet[1895]: I0209 10:04:08.810583 1895 scope.go:115] "RemoveContainer" containerID="09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da" Feb 9 10:04:08.810915 env[1372]: time="2024-02-09T10:04:08.810837190Z" level=error msg="ContainerStatus for \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\": not found" Feb 9 10:04:08.811182 kubelet[1895]: E0209 10:04:08.811166 1895 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\": not found" containerID="09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da" Feb 9 10:04:08.811352 kubelet[1895]: I0209 10:04:08.811327 1895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da} err="failed to get container status \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\": rpc error: code = NotFound desc = an error occurred when try to find container \"09969a8834eb88a42dc24d0b3ca76b8acdf21655c7421708ce112709fb6c22da\": not found" Feb 9 10:04:08.811439 kubelet[1895]: I0209 10:04:08.811429 1895 scope.go:115] "RemoveContainer" 
containerID="f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40" Feb 9 10:04:08.811724 env[1372]: time="2024-02-09T10:04:08.811672687Z" level=error msg="ContainerStatus for \"f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40\": not found" Feb 9 10:04:08.811848 kubelet[1895]: E0209 10:04:08.811827 1895 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40\": not found" containerID="f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40" Feb 9 10:04:08.811893 kubelet[1895]: I0209 10:04:08.811860 1895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40} err="failed to get container status \"f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9949a90432b540fb024a211bac8014f575ab17ce6aa2d15d2685fa91b36ca40\": not found" Feb 9 10:04:08.811893 kubelet[1895]: I0209 10:04:08.811872 1895 scope.go:115] "RemoveContainer" containerID="843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a" Feb 9 10:04:08.812165 env[1372]: time="2024-02-09T10:04:08.812120694Z" level=error msg="ContainerStatus for \"843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a\": not found" Feb 9 10:04:08.812381 kubelet[1895]: E0209 10:04:08.812359 1895 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a\": not found" containerID="843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a" Feb 9 10:04:08.812444 kubelet[1895]: I0209 10:04:08.812386 1895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a} err="failed to get container status \"843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"843dc6f8aa48588ddf9522a12b77f89135748721f3580288cccc1008c27e8b0a\": not found" Feb 9 10:04:08.812444 kubelet[1895]: I0209 10:04:08.812395 1895 scope.go:115] "RemoveContainer" containerID="8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568" Feb 9 10:04:08.812587 env[1372]: time="2024-02-09T10:04:08.812537502Z" level=error msg="ContainerStatus for \"8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568\": not found" Feb 9 10:04:08.812693 kubelet[1895]: E0209 10:04:08.812673 1895 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568\": not found" containerID="8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568" Feb 9 10:04:08.812753 kubelet[1895]: I0209 10:04:08.812702 1895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568} err="failed to get container status \"8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"8f8fb7df952bee0b43148baef79532e66999dfce8936a2b3b15778e5f4d50568\": not found" Feb 9 10:04:08.812753 kubelet[1895]: I0209 10:04:08.812713 1895 scope.go:115] "RemoveContainer" containerID="73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383" Feb 9 10:04:08.812969 env[1372]: time="2024-02-09T10:04:08.812924673Z" level=error msg="ContainerStatus for \"73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383\": not found" Feb 9 10:04:08.813228 kubelet[1895]: E0209 10:04:08.813215 1895 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383\": not found" containerID="73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383" Feb 9 10:04:08.813354 kubelet[1895]: I0209 10:04:08.813340 1895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383} err="failed to get container status \"73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383\": rpc error: code = NotFound desc = an error occurred when try to find container \"73dc60cb3f419a56d72943d25756b40cda3419f8762618c2ea95fed7a8645383\": not found" Feb 9 10:04:09.512316 kubelet[1895]: E0209 10:04:09.512276 1895 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:09.532795 env[1372]: time="2024-02-09T10:04:09.532747035Z" level=info msg="StopPodSandbox for \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\"" Feb 9 10:04:09.532895 env[1372]: time="2024-02-09T10:04:09.532845028Z" level=info msg="TearDown network for sandbox 
\"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" successfully" Feb 9 10:04:09.532934 env[1372]: time="2024-02-09T10:04:09.532890065Z" level=info msg="StopPodSandbox for \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" returns successfully" Feb 9 10:04:09.533253 env[1372]: time="2024-02-09T10:04:09.533226440Z" level=info msg="RemovePodSandbox for \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\"" Feb 9 10:04:09.533380 env[1372]: time="2024-02-09T10:04:09.533349151Z" level=info msg="Forcibly stopping sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\"" Feb 9 10:04:09.533512 env[1372]: time="2024-02-09T10:04:09.533468662Z" level=info msg="TearDown network for sandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" successfully" Feb 9 10:04:09.553843 env[1372]: time="2024-02-09T10:04:09.553766027Z" level=info msg="RemovePodSandbox \"015908771694ce016ed6a89451ffa66b25d50ca5fb7423d798037a4610b0ed82\" returns successfully" Feb 9 10:04:09.556519 kubelet[1895]: E0209 10:04:09.556472 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:09.579336 kubelet[1895]: I0209 10:04:09.579315 1895 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=94612359-a2e2-4be9-916b-2a63495a81bd path="/var/lib/kubelet/pods/94612359-a2e2-4be9-916b-2a63495a81bd/volumes" Feb 9 10:04:09.610545 kubelet[1895]: E0209 10:04:09.610523 1895 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 10:04:10.557367 kubelet[1895]: E0209 10:04:10.557333 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:11.558123 kubelet[1895]: E0209 10:04:11.558083 1895 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:12.548470 kubelet[1895]: I0209 10:04:12.548440 1895 setters.go:548] "Node became not ready" node="10.200.20.13" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 10:04:12.548398019 +0000 UTC m=+63.754192375 LastTransitionTime:2024-02-09 10:04:12.548398019 +0000 UTC m=+63.754192375 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 10:04:12.558997 kubelet[1895]: E0209 10:04:12.558976 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:12.709124 kubelet[1895]: I0209 10:04:12.709096 1895 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:04:12.709331 kubelet[1895]: E0209 10:04:12.709319 1895 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94612359-a2e2-4be9-916b-2a63495a81bd" containerName="mount-cgroup" Feb 9 10:04:12.709407 kubelet[1895]: E0209 10:04:12.709399 1895 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94612359-a2e2-4be9-916b-2a63495a81bd" containerName="mount-bpf-fs" Feb 9 10:04:12.709473 kubelet[1895]: E0209 10:04:12.709454 1895 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94612359-a2e2-4be9-916b-2a63495a81bd" containerName="cilium-agent" Feb 9 10:04:12.709581 kubelet[1895]: E0209 10:04:12.709572 1895 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94612359-a2e2-4be9-916b-2a63495a81bd" containerName="apply-sysctl-overwrites" Feb 9 10:04:12.709643 kubelet[1895]: E0209 10:04:12.709625 1895 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94612359-a2e2-4be9-916b-2a63495a81bd" containerName="clean-cilium-state" Feb 9 10:04:12.709716 kubelet[1895]: I0209 10:04:12.709701 1895 memory_manager.go:346] "RemoveStaleState removing state" 
podUID="94612359-a2e2-4be9-916b-2a63495a81bd" containerName="cilium-agent" Feb 9 10:04:12.714048 systemd[1]: Created slice kubepods-burstable-pod5ab85291_d64e_4fb3_a87a_33d82564e811.slice. Feb 9 10:04:12.729702 kubelet[1895]: I0209 10:04:12.729670 1895 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:04:12.734429 systemd[1]: Created slice kubepods-besteffort-podf8548689_5be5_4091_ae2e_6ce818337c80.slice. Feb 9 10:04:12.866080 kubelet[1895]: I0209 10:04:12.865973 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-run\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.866287 kubelet[1895]: I0209 10:04:12.866257 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-bpf-maps\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.866410 kubelet[1895]: I0209 10:04:12.866399 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-cgroup\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.866555 kubelet[1895]: I0209 10:04:12.866533 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cni-path\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.866664 kubelet[1895]: I0209 10:04:12.866654 1895 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-ipsec-secrets\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.866785 kubelet[1895]: I0209 10:04:12.866773 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ab85291-d64e-4fb3-a87a-33d82564e811-hubble-tls\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.866906 kubelet[1895]: I0209 10:04:12.866886 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8548689-5be5-4091-ae2e-6ce818337c80-cilium-config-path\") pod \"cilium-operator-574c4bb98d-rsnhk\" (UID: \"f8548689-5be5-4091-ae2e-6ce818337c80\") " pod="kube-system/cilium-operator-574c4bb98d-rsnhk" Feb 9 10:04:12.866957 kubelet[1895]: I0209 10:04:12.866924 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-config-path\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.866957 kubelet[1895]: I0209 10:04:12.866953 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-host-proc-sys-net\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.867011 kubelet[1895]: I0209 10:04:12.866977 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xpqvq\" (UniqueName: \"kubernetes.io/projected/5ab85291-d64e-4fb3-a87a-33d82564e811-kube-api-access-xpqvq\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.867011 kubelet[1895]: I0209 10:04:12.867006 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfrd\" (UniqueName: \"kubernetes.io/projected/f8548689-5be5-4091-ae2e-6ce818337c80-kube-api-access-hpfrd\") pod \"cilium-operator-574c4bb98d-rsnhk\" (UID: \"f8548689-5be5-4091-ae2e-6ce818337c80\") " pod="kube-system/cilium-operator-574c4bb98d-rsnhk" Feb 9 10:04:12.867059 kubelet[1895]: I0209 10:04:12.867027 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-etc-cni-netd\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.867059 kubelet[1895]: I0209 10:04:12.867058 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-xtables-lock\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.867104 kubelet[1895]: I0209 10:04:12.867079 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ab85291-d64e-4fb3-a87a-33d82564e811-clustermesh-secrets\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.867131 kubelet[1895]: I0209 10:04:12.867118 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-host-proc-sys-kernel\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.867156 kubelet[1895]: I0209 10:04:12.867153 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-hostproc\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:12.867179 kubelet[1895]: I0209 10:04:12.867171 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-lib-modules\") pod \"cilium-wh4wm\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " pod="kube-system/cilium-wh4wm" Feb 9 10:04:13.019506 env[1372]: time="2024-02-09T10:04:13.019453833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wh4wm,Uid:5ab85291-d64e-4fb3-a87a-33d82564e811,Namespace:kube-system,Attempt:0,}" Feb 9 10:04:13.037551 env[1372]: time="2024-02-09T10:04:13.037509586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-rsnhk,Uid:f8548689-5be5-4091-ae2e-6ce818337c80,Namespace:kube-system,Attempt:0,}" Feb 9 10:04:13.090545 env[1372]: time="2024-02-09T10:04:13.090474529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:04:13.091091 env[1372]: time="2024-02-09T10:04:13.090527525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:04:13.091091 env[1372]: time="2024-02-09T10:04:13.090950895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:04:13.091296 env[1372]: time="2024-02-09T10:04:13.091201797Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8 pid=3390 runtime=io.containerd.runc.v2 Feb 9 10:04:13.101558 systemd[1]: Started cri-containerd-1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8.scope. Feb 9 10:04:13.125447 env[1372]: time="2024-02-09T10:04:13.124251081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wh4wm,Uid:5ab85291-d64e-4fb3-a87a-33d82564e811,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\"" Feb 9 10:04:13.128961 env[1372]: time="2024-02-09T10:04:13.128892030Z" level=info msg="CreateContainer within sandbox \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:04:13.135815 env[1372]: time="2024-02-09T10:04:13.135650668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:04:13.135815 env[1372]: time="2024-02-09T10:04:13.135686105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:04:13.135815 env[1372]: time="2024-02-09T10:04:13.135696745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:04:13.136066 env[1372]: time="2024-02-09T10:04:13.136020122Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/41c7fcf2d25373d0f940269a2fb2d3aae55562bc9f8872a18c1fe8a5d8152dcc pid=3430 runtime=io.containerd.runc.v2 Feb 9 10:04:13.147260 systemd[1]: Started cri-containerd-41c7fcf2d25373d0f940269a2fb2d3aae55562bc9f8872a18c1fe8a5d8152dcc.scope. Feb 9 10:04:13.181122 env[1372]: time="2024-02-09T10:04:13.181078589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-rsnhk,Uid:f8548689-5be5-4091-ae2e-6ce818337c80,Namespace:kube-system,Attempt:0,} returns sandbox id \"41c7fcf2d25373d0f940269a2fb2d3aae55562bc9f8872a18c1fe8a5d8152dcc\"" Feb 9 10:04:13.182573 env[1372]: time="2024-02-09T10:04:13.182446051Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 10:04:13.209583 env[1372]: time="2024-02-09T10:04:13.209547999Z" level=info msg="CreateContainer within sandbox \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e\"" Feb 9 10:04:13.210262 env[1372]: time="2024-02-09T10:04:13.210236470Z" level=info msg="StartContainer for \"b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e\"" Feb 9 10:04:13.223387 systemd[1]: Started cri-containerd-b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e.scope. Feb 9 10:04:13.232699 systemd[1]: cri-containerd-b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e.scope: Deactivated successfully. Feb 9 10:04:13.232972 systemd[1]: Stopped cri-containerd-b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e.scope. 
Feb 9 10:04:13.307660 env[1372]: time="2024-02-09T10:04:13.307606567Z" level=info msg="shim disconnected" id=b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e Feb 9 10:04:13.307660 env[1372]: time="2024-02-09T10:04:13.307657523Z" level=warning msg="cleaning up after shim disconnected" id=b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e namespace=k8s.io Feb 9 10:04:13.307660 env[1372]: time="2024-02-09T10:04:13.307667882Z" level=info msg="cleaning up dead shim" Feb 9 10:04:13.314971 env[1372]: time="2024-02-09T10:04:13.314913686Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3492 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T10:04:13Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 10:04:13.315260 env[1372]: time="2024-02-09T10:04:13.315166868Z" level=error msg="copy shim log" error="read /proc/self/fd/64: file already closed" Feb 9 10:04:13.316368 env[1372]: time="2024-02-09T10:04:13.316334584Z" level=error msg="Failed to pipe stdout of container \"b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e\"" error="reading from a closed fifo" Feb 9 10:04:13.316534 env[1372]: time="2024-02-09T10:04:13.316467815Z" level=error msg="Failed to pipe stderr of container \"b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e\"" error="reading from a closed fifo" Feb 9 10:04:13.326911 env[1372]: time="2024-02-09T10:04:13.326847515Z" level=error msg="StartContainer for \"b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 10:04:13.327152 kubelet[1895]: E0209 10:04:13.327121 1895 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e" Feb 9 10:04:13.327515 kubelet[1895]: E0209 10:04:13.327472 1895 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 10:04:13.327515 kubelet[1895]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 10:04:13.327515 kubelet[1895]: rm /hostbin/cilium-mount Feb 9 10:04:13.327617 kubelet[1895]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xpqvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-wh4wm_kube-system(5ab85291-d64e-4fb3-a87a-33d82564e811): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 10:04:13.327617 kubelet[1895]: E0209 10:04:13.327559 1895 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wh4wm" podUID=5ab85291-d64e-4fb3-a87a-33d82564e811 Feb 9 10:04:13.559939 kubelet[1895]: E0209 10:04:13.559869 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:13.736608 env[1372]: time="2024-02-09T10:04:13.736415311Z" level=info msg="StopPodSandbox for \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\"" Feb 9 10:04:13.736954 env[1372]: time="2024-02-09T10:04:13.736920035Z" level=info msg="Container to stop \"b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:04:13.742441 systemd[1]: 
cri-containerd-1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8.scope: Deactivated successfully. Feb 9 10:04:13.782749 env[1372]: time="2024-02-09T10:04:13.782699491Z" level=info msg="shim disconnected" id=1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8 Feb 9 10:04:13.782749 env[1372]: time="2024-02-09T10:04:13.782744087Z" level=warning msg="cleaning up after shim disconnected" id=1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8 namespace=k8s.io Feb 9 10:04:13.782749 env[1372]: time="2024-02-09T10:04:13.782753487Z" level=info msg="cleaning up dead shim" Feb 9 10:04:13.790334 env[1372]: time="2024-02-09T10:04:13.790287710Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3521 runtime=io.containerd.runc.v2\n" Feb 9 10:04:13.790664 env[1372]: time="2024-02-09T10:04:13.790633525Z" level=info msg="TearDown network for sandbox \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\" successfully" Feb 9 10:04:13.790717 env[1372]: time="2024-02-09T10:04:13.790663163Z" level=info msg="StopPodSandbox for \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\" returns successfully" Feb 9 10:04:13.976013 kubelet[1895]: I0209 10:04:13.975382 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ab85291-d64e-4fb3-a87a-33d82564e811-hubble-tls\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.976013 kubelet[1895]: I0209 10:04:13.975539 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-config-path\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.976013 kubelet[1895]: I0209 10:04:13.975564 
1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-run\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.976013 kubelet[1895]: W0209 10:04:13.975745 1895 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/5ab85291-d64e-4fb3-a87a-33d82564e811/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 10:04:13.977573 kubelet[1895]: I0209 10:04:13.977555 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-bpf-maps\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.977886 kubelet[1895]: I0209 10:04:13.977872 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-ipsec-secrets\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.977999 kubelet[1895]: I0209 10:04:13.977988 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cni-path\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.978081 kubelet[1895]: I0209 10:04:13.978072 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-host-proc-sys-net\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.978158 kubelet[1895]: I0209 10:04:13.978149 
1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-host-proc-sys-kernel\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.978260 kubelet[1895]: I0209 10:04:13.978251 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpqvq\" (UniqueName: \"kubernetes.io/projected/5ab85291-d64e-4fb3-a87a-33d82564e811-kube-api-access-xpqvq\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.978340 kubelet[1895]: I0209 10:04:13.978330 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-cgroup\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.978413 kubelet[1895]: I0209 10:04:13.978403 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-xtables-lock\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.978506 kubelet[1895]: I0209 10:04:13.978477 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-hostproc\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.978592 kubelet[1895]: I0209 10:04:13.978581 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ab85291-d64e-4fb3-a87a-33d82564e811-clustermesh-secrets\") pod 
\"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.978663 kubelet[1895]: I0209 10:04:13.978654 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-etc-cni-netd\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.978731 kubelet[1895]: I0209 10:04:13.978723 1895 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-lib-modules\") pod \"5ab85291-d64e-4fb3-a87a-33d82564e811\" (UID: \"5ab85291-d64e-4fb3-a87a-33d82564e811\") " Feb 9 10:04:13.978827 kubelet[1895]: I0209 10:04:13.978814 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:04:13.978889 kubelet[1895]: I0209 10:04:13.977762 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:04:13.978941 kubelet[1895]: I0209 10:04:13.977779 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:04:13.978996 kubelet[1895]: I0209 10:04:13.978904 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 10:04:13.979079 kubelet[1895]: I0209 10:04:13.979065 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:04:13.979172 kubelet[1895]: I0209 10:04:13.979159 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:04:13.979245 kubelet[1895]: I0209 10:04:13.979233 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-hostproc" (OuterVolumeSpecName: "hostproc") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:04:13.979499 kubelet[1895]: I0209 10:04:13.979465 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:04:13.979607 kubelet[1895]: I0209 10:04:13.979594 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cni-path" (OuterVolumeSpecName: "cni-path") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:04:13.979833 kubelet[1895]: I0209 10:04:13.979817 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:04:13.979943 kubelet[1895]: I0209 10:04:13.979929 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:04:13.983992 systemd[1]: var-lib-kubelet-pods-5ab85291\x2dd64e\x2d4fb3\x2da87a\x2d33d82564e811-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 10:04:13.985010 kubelet[1895]: I0209 10:04:13.984990 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab85291-d64e-4fb3-a87a-33d82564e811-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:04:13.986274 systemd[1]: var-lib-kubelet-pods-5ab85291\x2dd64e\x2d4fb3\x2da87a\x2d33d82564e811-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxpqvq.mount: Deactivated successfully. Feb 9 10:04:13.986351 systemd[1]: var-lib-kubelet-pods-5ab85291\x2dd64e\x2d4fb3\x2da87a\x2d33d82564e811-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 10:04:13.988595 kubelet[1895]: I0209 10:04:13.988560 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 10:04:13.989817 systemd[1]: var-lib-kubelet-pods-5ab85291\x2dd64e\x2d4fb3\x2da87a\x2d33d82564e811-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 10:04:13.990469 kubelet[1895]: I0209 10:04:13.990445 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab85291-d64e-4fb3-a87a-33d82564e811-kube-api-access-xpqvq" (OuterVolumeSpecName: "kube-api-access-xpqvq") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "kube-api-access-xpqvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:04:13.991105 kubelet[1895]: I0209 10:04:13.991086 1895 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab85291-d64e-4fb3-a87a-33d82564e811-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5ab85291-d64e-4fb3-a87a-33d82564e811" (UID: "5ab85291-d64e-4fb3-a87a-33d82564e811"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 10:04:14.079169 kubelet[1895]: I0209 10:04:14.079141 1895 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-cgroup\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.079331 kubelet[1895]: I0209 10:04:14.079317 1895 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-xtables-lock\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.079414 kubelet[1895]: I0209 10:04:14.079404 1895 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-hostproc\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.079474 kubelet[1895]: I0209 10:04:14.079466 1895 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-lib-modules\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.079579 kubelet[1895]: I0209 10:04:14.079568 1895 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ab85291-d64e-4fb3-a87a-33d82564e811-clustermesh-secrets\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.079647 kubelet[1895]: I0209 10:04:14.079638 1895 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-etc-cni-netd\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.079703 kubelet[1895]: I0209 10:04:14.079695 1895 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cni-path\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.079761 kubelet[1895]: I0209 10:04:14.079752 1895 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ab85291-d64e-4fb3-a87a-33d82564e811-hubble-tls\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.079821 kubelet[1895]: I0209 10:04:14.079812 1895 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-config-path\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.079875 kubelet[1895]: I0209 10:04:14.079867 1895 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-run\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.079931 kubelet[1895]: I0209 10:04:14.079923 1895 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-bpf-maps\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.079989 kubelet[1895]: I0209 10:04:14.079981 1895 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ab85291-d64e-4fb3-a87a-33d82564e811-cilium-ipsec-secrets\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.080068 kubelet[1895]: I0209 10:04:14.080059 1895 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-host-proc-sys-net\") on node \"10.200.20.13\" 
DevicePath \"\"" Feb 9 10:04:14.080129 kubelet[1895]: I0209 10:04:14.080120 1895 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ab85291-d64e-4fb3-a87a-33d82564e811-host-proc-sys-kernel\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.080189 kubelet[1895]: I0209 10:04:14.080181 1895 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xpqvq\" (UniqueName: \"kubernetes.io/projected/5ab85291-d64e-4fb3-a87a-33d82564e811-kube-api-access-xpqvq\") on node \"10.200.20.13\" DevicePath \"\"" Feb 9 10:04:14.560841 kubelet[1895]: E0209 10:04:14.560802 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:14.612022 kubelet[1895]: E0209 10:04:14.611984 1895 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 10:04:14.740979 kubelet[1895]: I0209 10:04:14.740889 1895 scope.go:115] "RemoveContainer" containerID="b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e" Feb 9 10:04:14.744271 systemd[1]: Removed slice kubepods-burstable-pod5ab85291_d64e_4fb3_a87a_33d82564e811.slice. 
Feb 9 10:04:14.745784 env[1372]: time="2024-02-09T10:04:14.745753069Z" level=info msg="RemoveContainer for \"b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e\"" Feb 9 10:04:14.784842 env[1372]: time="2024-02-09T10:04:14.784803395Z" level=info msg="RemoveContainer for \"b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e\" returns successfully" Feb 9 10:04:14.824451 kubelet[1895]: I0209 10:04:14.824349 1895 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:04:14.824978 kubelet[1895]: E0209 10:04:14.824950 1895 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ab85291-d64e-4fb3-a87a-33d82564e811" containerName="mount-cgroup" Feb 9 10:04:14.825022 kubelet[1895]: I0209 10:04:14.824998 1895 memory_manager.go:346] "RemoveStaleState removing state" podUID="5ab85291-d64e-4fb3-a87a-33d82564e811" containerName="mount-cgroup" Feb 9 10:04:14.830374 systemd[1]: Created slice kubepods-burstable-pod8dac50a4_3378_4e9b_b4e5_9eace99f6127.slice. Feb 9 10:04:14.884675 kubelet[1895]: I0209 10:04:14.884570 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8dac50a4-3378-4e9b-b4e5-9eace99f6127-host-proc-sys-net\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.884675 kubelet[1895]: I0209 10:04:14.884611 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8dac50a4-3378-4e9b-b4e5-9eace99f6127-hubble-tls\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.884675 kubelet[1895]: I0209 10:04:14.884632 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/8dac50a4-3378-4e9b-b4e5-9eace99f6127-cilium-cgroup\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.884675 kubelet[1895]: I0209 10:04:14.884653 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dac50a4-3378-4e9b-b4e5-9eace99f6127-lib-modules\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.884901 kubelet[1895]: I0209 10:04:14.884732 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8dac50a4-3378-4e9b-b4e5-9eace99f6127-host-proc-sys-kernel\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.884901 kubelet[1895]: I0209 10:04:14.884770 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8dac50a4-3378-4e9b-b4e5-9eace99f6127-cilium-run\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.884901 kubelet[1895]: I0209 10:04:14.884814 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8dac50a4-3378-4e9b-b4e5-9eace99f6127-bpf-maps\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.884901 kubelet[1895]: I0209 10:04:14.884845 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dac50a4-3378-4e9b-b4e5-9eace99f6127-xtables-lock\") pod \"cilium-gl728\" (UID: 
\"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.884901 kubelet[1895]: I0209 10:04:14.884865 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d72dg\" (UniqueName: \"kubernetes.io/projected/8dac50a4-3378-4e9b-b4e5-9eace99f6127-kube-api-access-d72dg\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.884901 kubelet[1895]: I0209 10:04:14.884893 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8dac50a4-3378-4e9b-b4e5-9eace99f6127-hostproc\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.885038 kubelet[1895]: I0209 10:04:14.884913 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8dac50a4-3378-4e9b-b4e5-9eace99f6127-cni-path\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.885038 kubelet[1895]: I0209 10:04:14.884934 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dac50a4-3378-4e9b-b4e5-9eace99f6127-etc-cni-netd\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.885038 kubelet[1895]: I0209 10:04:14.884975 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8dac50a4-3378-4e9b-b4e5-9eace99f6127-clustermesh-secrets\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.885038 kubelet[1895]: I0209 10:04:14.884997 
1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8dac50a4-3378-4e9b-b4e5-9eace99f6127-cilium-config-path\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:14.885038 kubelet[1895]: I0209 10:04:14.885037 1895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8dac50a4-3378-4e9b-b4e5-9eace99f6127-cilium-ipsec-secrets\") pod \"cilium-gl728\" (UID: \"8dac50a4-3378-4e9b-b4e5-9eace99f6127\") " pod="kube-system/cilium-gl728" Feb 9 10:04:15.139219 env[1372]: time="2024-02-09T10:04:15.138838363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gl728,Uid:8dac50a4-3378-4e9b-b4e5-9eace99f6127,Namespace:kube-system,Attempt:0,}" Feb 9 10:04:15.212219 env[1372]: time="2024-02-09T10:04:15.205821327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:04:15.212219 env[1372]: time="2024-02-09T10:04:15.205857605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:04:15.212219 env[1372]: time="2024-02-09T10:04:15.205867204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:04:15.212219 env[1372]: time="2024-02-09T10:04:15.205975436Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32 pid=3550 runtime=io.containerd.runc.v2 Feb 9 10:04:15.234145 systemd[1]: Started cri-containerd-f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32.scope. 
Feb 9 10:04:15.262650 env[1372]: time="2024-02-09T10:04:15.262612123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gl728,Uid:8dac50a4-3378-4e9b-b4e5-9eace99f6127,Namespace:kube-system,Attempt:0,} returns sandbox id \"f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32\"" Feb 9 10:04:15.265373 env[1372]: time="2024-02-09T10:04:15.265344492Z" level=info msg="CreateContainer within sandbox \"f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:04:15.319253 env[1372]: time="2024-02-09T10:04:15.319199573Z" level=info msg="CreateContainer within sandbox \"f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"35e8a6d50c001d0f5435d81196af2a999dd2a73443802938992ac26a64d9ac31\"" Feb 9 10:04:15.320032 env[1372]: time="2024-02-09T10:04:15.320008357Z" level=info msg="StartContainer for \"35e8a6d50c001d0f5435d81196af2a999dd2a73443802938992ac26a64d9ac31\"" Feb 9 10:04:15.333755 systemd[1]: Started cri-containerd-35e8a6d50c001d0f5435d81196af2a999dd2a73443802938992ac26a64d9ac31.scope. Feb 9 10:04:15.364152 systemd[1]: cri-containerd-35e8a6d50c001d0f5435d81196af2a999dd2a73443802938992ac26a64d9ac31.scope: Deactivated successfully. 
Feb 9 10:04:15.367746 env[1372]: time="2024-02-09T10:04:15.367697588Z" level=info msg="StartContainer for \"35e8a6d50c001d0f5435d81196af2a999dd2a73443802938992ac26a64d9ac31\" returns successfully" Feb 9 10:04:15.591175 kubelet[1895]: E0209 10:04:15.561999 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:15.591175 kubelet[1895]: I0209 10:04:15.590891 1895 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=5ab85291-d64e-4fb3-a87a-33d82564e811 path="/var/lib/kubelet/pods/5ab85291-d64e-4fb3-a87a-33d82564e811/volumes" Feb 9 10:04:15.697257 env[1372]: time="2024-02-09T10:04:15.697212548Z" level=info msg="shim disconnected" id=35e8a6d50c001d0f5435d81196af2a999dd2a73443802938992ac26a64d9ac31 Feb 9 10:04:15.697445 env[1372]: time="2024-02-09T10:04:15.697427253Z" level=warning msg="cleaning up after shim disconnected" id=35e8a6d50c001d0f5435d81196af2a999dd2a73443802938992ac26a64d9ac31 namespace=k8s.io Feb 9 10:04:15.697547 env[1372]: time="2024-02-09T10:04:15.697533325Z" level=info msg="cleaning up dead shim" Feb 9 10:04:15.704067 env[1372]: time="2024-02-09T10:04:15.704024952Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3634 runtime=io.containerd.runc.v2\n" Feb 9 10:04:15.745903 env[1372]: time="2024-02-09T10:04:15.745867752Z" level=info msg="CreateContainer within sandbox \"f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 10:04:15.896247 env[1372]: time="2024-02-09T10:04:15.895759809Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:04:15.896521 env[1372]: time="2024-02-09T10:04:15.896467920Z" 
level=info msg="CreateContainer within sandbox \"f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9de67a2f4b85158c9088d189f801b47e4b4fa96cef7a779dffdd5ae6252a9798\"" Feb 9 10:04:15.897349 env[1372]: time="2024-02-09T10:04:15.897318420Z" level=info msg="StartContainer for \"9de67a2f4b85158c9088d189f801b47e4b4fa96cef7a779dffdd5ae6252a9798\"" Feb 9 10:04:15.911108 systemd[1]: Started cri-containerd-9de67a2f4b85158c9088d189f801b47e4b4fa96cef7a779dffdd5ae6252a9798.scope. Feb 9 10:04:15.914963 env[1372]: time="2024-02-09T10:04:15.914931391Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:04:15.920616 env[1372]: time="2024-02-09T10:04:15.920575037Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:04:15.922573 env[1372]: time="2024-02-09T10:04:15.922540100Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 10:04:15.924690 env[1372]: time="2024-02-09T10:04:15.924664312Z" level=info msg="CreateContainer within sandbox \"41c7fcf2d25373d0f940269a2fb2d3aae55562bc9f8872a18c1fe8a5d8152dcc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 10:04:15.949212 systemd[1]: cri-containerd-9de67a2f4b85158c9088d189f801b47e4b4fa96cef7a779dffdd5ae6252a9798.scope: Deactivated successfully. 
Feb 9 10:04:15.952086 env[1372]: time="2024-02-09T10:04:15.952052760Z" level=info msg="StartContainer for \"9de67a2f4b85158c9088d189f801b47e4b4fa96cef7a779dffdd5ae6252a9798\" returns successfully" Feb 9 10:04:15.996087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4086212405.mount: Deactivated successfully. Feb 9 10:04:16.001367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3254398487.mount: Deactivated successfully. Feb 9 10:04:16.003041 env[1372]: time="2024-02-09T10:04:16.002999205Z" level=info msg="shim disconnected" id=9de67a2f4b85158c9088d189f801b47e4b4fa96cef7a779dffdd5ae6252a9798 Feb 9 10:04:16.003246 env[1372]: time="2024-02-09T10:04:16.003227629Z" level=warning msg="cleaning up after shim disconnected" id=9de67a2f4b85158c9088d189f801b47e4b4fa96cef7a779dffdd5ae6252a9798 namespace=k8s.io Feb 9 10:04:16.003315 env[1372]: time="2024-02-09T10:04:16.003301264Z" level=info msg="cleaning up dead shim" Feb 9 10:04:16.010654 env[1372]: time="2024-02-09T10:04:16.010616479Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3696 runtime=io.containerd.runc.v2\n" Feb 9 10:04:16.033044 env[1372]: time="2024-02-09T10:04:16.033001652Z" level=info msg="CreateContainer within sandbox \"41c7fcf2d25373d0f940269a2fb2d3aae55562bc9f8872a18c1fe8a5d8152dcc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b2ba14d4a511903cd8050e5a71a0e88a8864bb862595fd82468fd38d2e8c23ea\"" Feb 9 10:04:16.034094 env[1372]: time="2024-02-09T10:04:16.034053940Z" level=info msg="StartContainer for \"b2ba14d4a511903cd8050e5a71a0e88a8864bb862595fd82468fd38d2e8c23ea\"" Feb 9 10:04:16.048568 systemd[1]: Started cri-containerd-b2ba14d4a511903cd8050e5a71a0e88a8864bb862595fd82468fd38d2e8c23ea.scope. 
Feb 9 10:04:16.078118 env[1372]: time="2024-02-09T10:04:16.078048340Z" level=info msg="StartContainer for \"b2ba14d4a511903cd8050e5a71a0e88a8864bb862595fd82468fd38d2e8c23ea\" returns successfully" Feb 9 10:04:16.412531 kubelet[1895]: W0209 10:04:16.412097 1895 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ab85291_d64e_4fb3_a87a_33d82564e811.slice/cri-containerd-b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e.scope WatchSource:0}: container "b5fa95f7d73170460f550e4045c25509867146f53420a4bd8922a64c57e4b34e" in namespace "k8s.io": not found Feb 9 10:04:16.562199 kubelet[1895]: E0209 10:04:16.562137 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:16.750824 env[1372]: time="2024-02-09T10:04:16.750728829Z" level=info msg="CreateContainer within sandbox \"f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 10:04:16.773840 kubelet[1895]: I0209 10:04:16.773721 1895 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-rsnhk" podStartSLOduration=2.033045313 podCreationTimestamp="2024-02-09 10:04:12 +0000 UTC" firstStartedPulling="2024-02-09 10:04:13.182225787 +0000 UTC m=+64.388020143" lastFinishedPulling="2024-02-09 10:04:15.922866717 +0000 UTC m=+67.128661073" observedRunningTime="2024-02-09 10:04:16.768120788 +0000 UTC m=+67.973915144" watchObservedRunningTime="2024-02-09 10:04:16.773686243 +0000 UTC m=+67.979480559" Feb 9 10:04:16.839418 env[1372]: time="2024-02-09T10:04:16.839366986Z" level=info msg="CreateContainer within sandbox \"f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"79d662082df53e4b90db8215c59d126367753a096e5b32ef5c9407b2ea5365ac\"" Feb 9 
10:04:16.840152 env[1372]: time="2024-02-09T10:04:16.840119614Z" level=info msg="StartContainer for \"79d662082df53e4b90db8215c59d126367753a096e5b32ef5c9407b2ea5365ac\"" Feb 9 10:04:16.854063 systemd[1]: Started cri-containerd-79d662082df53e4b90db8215c59d126367753a096e5b32ef5c9407b2ea5365ac.scope. Feb 9 10:04:16.884306 systemd[1]: cri-containerd-79d662082df53e4b90db8215c59d126367753a096e5b32ef5c9407b2ea5365ac.scope: Deactivated successfully. Feb 9 10:04:16.887836 env[1372]: time="2024-02-09T10:04:16.887798960Z" level=info msg="StartContainer for \"79d662082df53e4b90db8215c59d126367753a096e5b32ef5c9407b2ea5365ac\" returns successfully" Feb 9 10:04:16.930125 env[1372]: time="2024-02-09T10:04:16.930080919Z" level=info msg="shim disconnected" id=79d662082df53e4b90db8215c59d126367753a096e5b32ef5c9407b2ea5365ac Feb 9 10:04:16.930365 env[1372]: time="2024-02-09T10:04:16.930346180Z" level=warning msg="cleaning up after shim disconnected" id=79d662082df53e4b90db8215c59d126367753a096e5b32ef5c9407b2ea5365ac namespace=k8s.io Feb 9 10:04:16.930426 env[1372]: time="2024-02-09T10:04:16.930414416Z" level=info msg="cleaning up dead shim" Feb 9 10:04:16.937578 env[1372]: time="2024-02-09T10:04:16.937537364Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3793 runtime=io.containerd.runc.v2\n" Feb 9 10:04:17.563241 kubelet[1895]: E0209 10:04:17.563202 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:17.755351 env[1372]: time="2024-02-09T10:04:17.755303233Z" level=info msg="CreateContainer within sandbox \"f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 10:04:17.809959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764574011.mount: Deactivated successfully. 
Feb 9 10:04:17.814364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638313595.mount: Deactivated successfully. Feb 9 10:04:17.846788 env[1372]: time="2024-02-09T10:04:17.846740660Z" level=info msg="CreateContainer within sandbox \"f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d8e57d7a1690de851563f2429777d436a827745a99e8c9e3e41875301e4aa372\"" Feb 9 10:04:17.847678 env[1372]: time="2024-02-09T10:04:17.847644158Z" level=info msg="StartContainer for \"d8e57d7a1690de851563f2429777d436a827745a99e8c9e3e41875301e4aa372\"" Feb 9 10:04:17.861537 systemd[1]: Started cri-containerd-d8e57d7a1690de851563f2429777d436a827745a99e8c9e3e41875301e4aa372.scope. Feb 9 10:04:17.885564 systemd[1]: cri-containerd-d8e57d7a1690de851563f2429777d436a827745a99e8c9e3e41875301e4aa372.scope: Deactivated successfully. Feb 9 10:04:17.887741 env[1372]: time="2024-02-09T10:04:17.887649462Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dac50a4_3378_4e9b_b4e5_9eace99f6127.slice/cri-containerd-d8e57d7a1690de851563f2429777d436a827745a99e8c9e3e41875301e4aa372.scope/memory.events\": no such file or directory" Feb 9 10:04:17.901078 env[1372]: time="2024-02-09T10:04:17.901037186Z" level=info msg="StartContainer for \"d8e57d7a1690de851563f2429777d436a827745a99e8c9e3e41875301e4aa372\" returns successfully" Feb 9 10:04:17.940858 env[1372]: time="2024-02-09T10:04:17.940797347Z" level=info msg="shim disconnected" id=d8e57d7a1690de851563f2429777d436a827745a99e8c9e3e41875301e4aa372 Feb 9 10:04:17.940858 env[1372]: time="2024-02-09T10:04:17.940855703Z" level=warning msg="cleaning up after shim disconnected" id=d8e57d7a1690de851563f2429777d436a827745a99e8c9e3e41875301e4aa372 namespace=k8s.io Feb 9 10:04:17.940858 env[1372]: time="2024-02-09T10:04:17.940865302Z" 
level=info msg="cleaning up dead shim" Feb 9 10:04:17.948031 env[1372]: time="2024-02-09T10:04:17.947988855Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3847 runtime=io.containerd.runc.v2\n" Feb 9 10:04:18.563551 kubelet[1895]: E0209 10:04:18.563511 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:18.759345 env[1372]: time="2024-02-09T10:04:18.759296641Z" level=info msg="CreateContainer within sandbox \"f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 10:04:18.813521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153150324.mount: Deactivated successfully. Feb 9 10:04:18.818809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707832637.mount: Deactivated successfully. Feb 9 10:04:18.856716 env[1372]: time="2024-02-09T10:04:18.856648729Z" level=info msg="CreateContainer within sandbox \"f52181d3064d55e8b1f4f83c9b1b7ac427f882d94ccad6947f844f1211601e32\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b84dab9e12523d4fc43f123a72101da540d2bfacf1f53f2999f878671e286abe\"" Feb 9 10:04:18.857403 env[1372]: time="2024-02-09T10:04:18.857371200Z" level=info msg="StartContainer for \"b84dab9e12523d4fc43f123a72101da540d2bfacf1f53f2999f878671e286abe\"" Feb 9 10:04:18.871550 systemd[1]: Started cri-containerd-b84dab9e12523d4fc43f123a72101da540d2bfacf1f53f2999f878671e286abe.scope. 
Feb 9 10:04:18.905865 env[1372]: time="2024-02-09T10:04:18.905820879Z" level=info msg="StartContainer for \"b84dab9e12523d4fc43f123a72101da540d2bfacf1f53f2999f878671e286abe\" returns successfully" Feb 9 10:04:19.225767 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 10:04:19.524920 kubelet[1895]: W0209 10:04:19.524883 1895 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dac50a4_3378_4e9b_b4e5_9eace99f6127.slice/cri-containerd-35e8a6d50c001d0f5435d81196af2a999dd2a73443802938992ac26a64d9ac31.scope WatchSource:0}: task 35e8a6d50c001d0f5435d81196af2a999dd2a73443802938992ac26a64d9ac31 not found: not found Feb 9 10:04:19.564539 kubelet[1895]: E0209 10:04:19.564509 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:19.774564 kubelet[1895]: I0209 10:04:19.774533 1895 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gl728" podStartSLOduration=5.774498239 podCreationTimestamp="2024-02-09 10:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:04:19.774319411 +0000 UTC m=+70.980113767" watchObservedRunningTime="2024-02-09 10:04:19.774498239 +0000 UTC m=+70.980292595" Feb 9 10:04:20.434036 systemd[1]: run-containerd-runc-k8s.io-b84dab9e12523d4fc43f123a72101da540d2bfacf1f53f2999f878671e286abe-runc.cswyui.mount: Deactivated successfully. 
Feb 9 10:04:20.565017 kubelet[1895]: E0209 10:04:20.564976 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:21.565572 kubelet[1895]: E0209 10:04:21.565530 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:21.659717 systemd-networkd[1525]: lxc_health: Link UP Feb 9 10:04:21.676081 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 10:04:21.676827 systemd-networkd[1525]: lxc_health: Gained carrier Feb 9 10:04:22.566323 kubelet[1895]: E0209 10:04:22.566278 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:22.573257 systemd[1]: run-containerd-runc-k8s.io-b84dab9e12523d4fc43f123a72101da540d2bfacf1f53f2999f878671e286abe-runc.aw50mi.mount: Deactivated successfully. Feb 9 10:04:22.631373 kubelet[1895]: W0209 10:04:22.631308 1895 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dac50a4_3378_4e9b_b4e5_9eace99f6127.slice/cri-containerd-9de67a2f4b85158c9088d189f801b47e4b4fa96cef7a779dffdd5ae6252a9798.scope WatchSource:0}: task 9de67a2f4b85158c9088d189f801b47e4b4fa96cef7a779dffdd5ae6252a9798 not found: not found Feb 9 10:04:22.850636 systemd-networkd[1525]: lxc_health: Gained IPv6LL Feb 9 10:04:23.566579 kubelet[1895]: E0209 10:04:23.566547 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:24.567508 kubelet[1895]: E0209 10:04:24.567462 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:24.780118 systemd[1]: run-containerd-runc-k8s.io-b84dab9e12523d4fc43f123a72101da540d2bfacf1f53f2999f878671e286abe-runc.UiDBJH.mount: Deactivated successfully. 
Feb 9 10:04:25.568669 kubelet[1895]: E0209 10:04:25.568637 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:25.754369 kubelet[1895]: W0209 10:04:25.754333 1895 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dac50a4_3378_4e9b_b4e5_9eace99f6127.slice/cri-containerd-79d662082df53e4b90db8215c59d126367753a096e5b32ef5c9407b2ea5365ac.scope WatchSource:0}: task 79d662082df53e4b90db8215c59d126367753a096e5b32ef5c9407b2ea5365ac not found: not found Feb 9 10:04:26.569365 kubelet[1895]: E0209 10:04:26.569333 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:26.911104 systemd[1]: run-containerd-runc-k8s.io-b84dab9e12523d4fc43f123a72101da540d2bfacf1f53f2999f878671e286abe-runc.FSXVsW.mount: Deactivated successfully. Feb 9 10:04:27.570178 kubelet[1895]: E0209 10:04:27.570135 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:28.570623 kubelet[1895]: E0209 10:04:28.570583 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:28.861989 kubelet[1895]: W0209 10:04:28.861689 1895 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dac50a4_3378_4e9b_b4e5_9eace99f6127.slice/cri-containerd-d8e57d7a1690de851563f2429777d436a827745a99e8c9e3e41875301e4aa372.scope WatchSource:0}: task d8e57d7a1690de851563f2429777d436a827745a99e8c9e3e41875301e4aa372 not found: not found Feb 9 10:04:29.511804 kubelet[1895]: E0209 10:04:29.511755 1895 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:29.571316 kubelet[1895]: E0209 10:04:29.571277 1895 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:30.571755 kubelet[1895]: E0209 10:04:30.571717 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:31.572020 kubelet[1895]: E0209 10:04:31.571982 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:32.572614 kubelet[1895]: E0209 10:04:32.572562 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:33.573358 kubelet[1895]: E0209 10:04:33.573324 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:34.573471 kubelet[1895]: E0209 10:04:34.573420 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:35.573758 kubelet[1895]: E0209 10:04:35.573732 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:36.574911 kubelet[1895]: E0209 10:04:36.574874 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:37.575724 kubelet[1895]: E0209 10:04:37.575693 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:38.576150 kubelet[1895]: E0209 10:04:38.576115 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:39.576546 kubelet[1895]: E0209 10:04:39.576514 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:40.576793 kubelet[1895]: E0209 10:04:40.576756 1895 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:41.577437 kubelet[1895]: E0209 10:04:41.577398 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:42.578149 kubelet[1895]: E0209 10:04:42.578118 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:42.943238 kubelet[1895]: E0209 10:04:42.942940 1895 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 10:04:43.578707 kubelet[1895]: E0209 10:04:43.578403 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:44.579811 kubelet[1895]: E0209 10:04:44.579767 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:45.580625 kubelet[1895]: E0209 10:04:45.580580 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:46.392770 kubelet[1895]: E0209 10:04:46.392742 1895 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.16:35482->10.200.20.27:2379: read: connection timed out" Feb 9 10:04:46.581471 kubelet[1895]: E0209 10:04:46.581443 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:47.582645 kubelet[1895]: E0209 10:04:47.582611 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:48.584018 kubelet[1895]: E0209 10:04:48.583982 1895 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:49.511904 kubelet[1895]: E0209 10:04:49.511866 1895 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:49.584872 kubelet[1895]: E0209 10:04:49.584846 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:50.586165 kubelet[1895]: E0209 10:04:50.586122 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:51.586641 kubelet[1895]: E0209 10:04:51.586612 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:52.587108 kubelet[1895]: E0209 10:04:52.587070 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:53.100202 kubelet[1895]: E0209 10:04:53.099897 1895 request.go:1092] Unexpected error when reading response body: context deadline exceeded (Client.Timeout or context cancellation while reading body) Feb 9 10:04:53.100202 kubelet[1895]: E0209 10:04:53.100054 1895 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T10:04:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T10:04:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T10:04:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T10:04:43Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":157636062},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":55608803},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88\\\",\\\"registry.k8s.io/kube-proxy:v1.27.10\\\"],\\\"sizeBytes\\\":23037360},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":17128551},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":253553}]}}\" for node \"10.200.20.13\": unexpected error when reading response body. Please retry. 
Original error: context deadline exceeded (Client.Timeout or context cancellation while reading body)" Feb 9 10:04:53.365979 kubelet[1895]: E0209 10:04:53.365579 1895 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.13\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.16:35370->10.200.20.27:2379: read: connection timed out" Feb 9 10:04:53.588206 kubelet[1895]: E0209 10:04:53.588178 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:54.588564 kubelet[1895]: E0209 10:04:54.588526 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:55.588854 kubelet[1895]: E0209 10:04:55.588823 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:56.393653 kubelet[1895]: E0209 10:04:56.393613 1895 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 10:04:56.589586 kubelet[1895]: E0209 10:04:56.589551 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:57.590173 kubelet[1895]: E0209 10:04:57.590143 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:58.590295 kubelet[1895]: E0209 10:04:58.590247 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:04:59.591443 kubelet[1895]: E0209 10:04:59.591409 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
10:05:00.591737 kubelet[1895]: E0209 10:05:00.591702 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:01.592841 kubelet[1895]: E0209 10:05:01.592801 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:02.593655 kubelet[1895]: E0209 10:05:02.593625 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:03.366317 kubelet[1895]: E0209 10:05:03.366283 1895 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.13\": Get \"https://10.200.20.16:6443/api/v1/nodes/10.200.20.13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 10:05:03.594335 kubelet[1895]: E0209 10:05:03.594301 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:04.594764 kubelet[1895]: E0209 10:05:04.594734 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:05.595628 kubelet[1895]: E0209 10:05:05.595596 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:06.395036 kubelet[1895]: E0209 10:05:06.394998 1895 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 10:05:06.596198 kubelet[1895]: E0209 10:05:06.596168 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:07.597344 kubelet[1895]: E0209 10:05:07.597320 1895 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:08.598661 kubelet[1895]: E0209 10:05:08.598626 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:09.511866 kubelet[1895]: E0209 10:05:09.511833 1895 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:09.559437 env[1372]: time="2024-02-09T10:05:09.559214895Z" level=info msg="StopPodSandbox for \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\"" Feb 9 10:05:09.559437 env[1372]: time="2024-02-09T10:05:09.559322029Z" level=info msg="TearDown network for sandbox \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\" successfully" Feb 9 10:05:09.559437 env[1372]: time="2024-02-09T10:05:09.559366595Z" level=info msg="StopPodSandbox for \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\" returns successfully" Feb 9 10:05:09.560094 env[1372]: time="2024-02-09T10:05:09.560062608Z" level=info msg="RemovePodSandbox for \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\"" Feb 9 10:05:09.560165 env[1372]: time="2024-02-09T10:05:09.560098172Z" level=info msg="Forcibly stopping sandbox \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\"" Feb 9 10:05:09.560199 env[1372]: time="2024-02-09T10:05:09.560163381Z" level=info msg="TearDown network for sandbox \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\" successfully" Feb 9 10:05:09.577403 env[1372]: time="2024-02-09T10:05:09.577362825Z" level=info msg="RemovePodSandbox \"1d51eac4ec8299566aff18b50e24cefd0760379bedf4bd1d1a8eff4da60658f8\" returns successfully" Feb 9 10:05:09.599390 kubelet[1895]: E0209 10:05:09.599357 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:10.600347 kubelet[1895]: E0209 10:05:10.600318 
1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:11.601828 kubelet[1895]: E0209 10:05:11.601798 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:12.602429 kubelet[1895]: E0209 10:05:12.602392 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:13.367140 kubelet[1895]: E0209 10:05:13.367100 1895 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.13\": Get \"https://10.200.20.16:6443/api/v1/nodes/10.200.20.13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 10:05:13.603306 kubelet[1895]: E0209 10:05:13.603274 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:14.603784 kubelet[1895]: E0209 10:05:14.603752 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:15.605045 kubelet[1895]: E0209 10:05:15.605012 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:16.396150 kubelet[1895]: E0209 10:05:16.396115 1895 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 10:05:16.396150 kubelet[1895]: I0209 10:05:16.396151 1895 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 9 10:05:16.606025 kubelet[1895]: E0209 10:05:16.605996 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 10:05:17.607207 kubelet[1895]: E0209 10:05:17.607174 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:18.607959 kubelet[1895]: E0209 10:05:18.607928 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:19.608872 kubelet[1895]: E0209 10:05:19.608846 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:20.609719 kubelet[1895]: E0209 10:05:20.609678 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:21.609866 kubelet[1895]: E0209 10:05:21.609839 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:22.610495 kubelet[1895]: E0209 10:05:22.610450 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:05:23.368116 kubelet[1895]: E0209 10:05:23.368072 1895 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.13\": Get \"https://10.200.20.16:6443/api/v1/nodes/10.200.20.13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 10:05:23.368116 kubelet[1895]: E0209 10:05:23.368110 1895 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count" Feb 9 10:05:23.611762 kubelet[1895]: E0209 10:05:23.611718 1895 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"