Feb 12 19:21:43.039280 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:21:43.039299 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:21:43.039307 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 12 19:21:43.039314 kernel: printk: bootconsole [pl11] enabled
Feb 12 19:21:43.039319 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:21:43.039324 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 12 19:21:43.039331 kernel: random: crng init done
Feb 12 19:21:43.039336 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:21:43.039341 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 12 19:21:43.039347 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:43.039352 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:43.039359 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 12 19:21:43.039364 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:43.039370 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:43.039376 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:43.039382 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:43.039388 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:43.039395 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:43.039400 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 12 19:21:43.039406 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:43.039412 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 12 19:21:43.039417 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:21:43.039423 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 12 19:21:43.039429 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Feb 12 19:21:43.039434 kernel: Zone ranges:
Feb 12 19:21:43.039440 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 12 19:21:43.039446 kernel: DMA32 empty
Feb 12 19:21:43.039452 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 12 19:21:43.039458 kernel: Movable zone start for each node
Feb 12 19:21:43.039463 kernel: Early memory node ranges
Feb 12 19:21:43.039469 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 12 19:21:43.039475 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 12 19:21:43.039481 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 12 19:21:43.039486 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 12 19:21:43.039492 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 12 19:21:43.039497 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 12 19:21:43.039503 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 12 19:21:43.039509 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 12 19:21:43.039514 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 12 19:21:43.039521 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 12 19:21:43.039529 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 12 19:21:43.039535 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:21:43.039542 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:21:43.039547 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:21:43.039554 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 12 19:21:43.039560 kernel: psci: SMC Calling Convention v1.4
Feb 12 19:21:43.039566 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 12 19:21:43.039572 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 12 19:21:43.039578 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:21:43.039584 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:21:43.039591 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 12 19:21:43.039596 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:21:43.039603 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:21:43.039608 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:21:43.039614 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:21:43.039620 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:21:43.039628 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:21:43.039634 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:21:43.039640 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 12 19:21:43.039646 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 12 19:21:43.039652 kernel: Policy zone: Normal
Feb 12 19:21:43.039659 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:21:43.039666 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:21:43.039672 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:21:43.039678 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:21:43.039684 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:21:43.039691 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 12 19:21:43.039698 kernel: Memory: 3991936K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202224K reserved, 0K cma-reserved)
Feb 12 19:21:43.039704 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:21:43.039710 kernel: trace event string verifier disabled
Feb 12 19:21:43.039716 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:21:43.039722 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:21:43.039729 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:21:43.039735 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 19:21:43.039741 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:21:43.039747 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:21:43.039753 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:21:43.039761 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:21:43.039767 kernel: GICv3: 960 SPIs implemented
Feb 12 19:21:43.039773 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:21:43.039779 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:21:43.039785 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:21:43.039791 kernel: GICv3: 16 PPIs implemented
Feb 12 19:21:43.039796 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 12 19:21:43.039803 kernel: ITS: No ITS available, not enabling LPIs
Feb 12 19:21:43.039809 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:21:43.039815 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:21:43.039821 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:21:43.039827 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:21:43.039835 kernel: Console: colour dummy device 80x25
Feb 12 19:21:43.039841 kernel: printk: console [tty1] enabled
Feb 12 19:21:43.039848 kernel: ACPI: Core revision 20210730
Feb 12 19:21:43.039854 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:21:43.039860 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:21:43.039867 kernel: LSM: Security Framework initializing
Feb 12 19:21:43.039873 kernel: SELinux: Initializing.
Feb 12 19:21:43.039879 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:21:43.039886 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:21:43.039894 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 12 19:21:43.039900 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 12 19:21:43.039906 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:21:43.039913 kernel: Remapping and enabling EFI services.
Feb 12 19:21:43.039919 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:21:43.039925 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:21:43.039931 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 12 19:21:43.039938 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:21:43.039944 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:21:43.039951 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:21:43.039958 kernel: SMP: Total of 2 processors activated.
Feb 12 19:21:43.039964 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:21:43.039970 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 12 19:21:43.039977 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:21:43.039983 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:21:43.039989 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:21:43.039996 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:21:43.040002 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:21:43.040009 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:21:43.040016 kernel: alternatives: patching kernel code
Feb 12 19:21:43.040027 kernel: devtmpfs: initialized
Feb 12 19:21:43.040035 kernel: KASLR enabled
Feb 12 19:21:43.040041 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:21:43.040048 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:21:43.040055 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:21:43.040061 kernel: SMBIOS 3.1.0 present.
Feb 12 19:21:43.040068 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 12 19:21:43.040075 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:21:43.040082 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:21:43.040089 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:21:43.040096 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:21:43.040102 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:21:43.040109 kernel: audit: type=2000 audit(0.089:1): state=initialized audit_enabled=0 res=1
Feb 12 19:21:43.040129 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:21:43.040136 kernel: cpuidle: using governor menu
Feb 12 19:21:43.040145 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:21:43.040151 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:21:43.040158 kernel: ACPI: bus type PCI registered
Feb 12 19:21:43.040164 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:21:43.040171 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:21:43.040178 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:21:43.040184 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:21:43.040191 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:21:43.040198 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:21:43.040205 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:21:43.040212 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:21:43.040218 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:21:43.040225 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:21:43.040231 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:21:43.040238 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:21:43.040244 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:21:43.040251 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:21:43.040257 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:21:43.040265 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:21:43.040272 kernel: ACPI: Interpreter enabled
Feb 12 19:21:43.040278 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:21:43.040285 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:21:43.040291 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:21:43.040298 kernel: printk: bootconsole [pl11] disabled
Feb 12 19:21:43.040304 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 12 19:21:43.040311 kernel: iommu: Default domain type: Translated
Feb 12 19:21:43.040317 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 19:21:43.040325 kernel: vgaarb: loaded
Feb 12 19:21:43.040331 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:21:43.040338 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:21:43.040345 kernel: PTP clock support registered
Feb 12 19:21:43.040352 kernel: Registered efivars operations
Feb 12 19:21:43.040358 kernel: No ACPI PMU IRQ for CPU0
Feb 12 19:21:43.040364 kernel: No ACPI PMU IRQ for CPU1
Feb 12 19:21:43.040371 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:21:43.040377 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:21:43.040385 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:21:43.040392 kernel: pnp: PnP ACPI init
Feb 12 19:21:43.040398 kernel: pnp: PnP ACPI: found 0 devices
Feb 12 19:21:43.040405 kernel: NET: Registered PF_INET protocol family
Feb 12 19:21:43.040412 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:21:43.040418 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:21:43.040425 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:21:43.040432 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:21:43.040438 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:21:43.040446 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:21:43.040453 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:21:43.040460 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:21:43.040466 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:21:43.040472 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:21:43.040479 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 12 19:21:43.040486 kernel: kvm [1]: HYP mode not available
Feb 12 19:21:43.040492 kernel: Initialise system trusted keyrings
Feb 12 19:21:43.040499 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:21:43.040506 kernel: Key type asymmetric registered
Feb 12 19:21:43.040513 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:21:43.040519 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:21:43.040526 kernel: io scheduler mq-deadline registered
Feb 12 19:21:43.040532 kernel: io scheduler kyber registered
Feb 12 19:21:43.040539 kernel: io scheduler bfq registered
Feb 12 19:21:43.040545 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:21:43.040552 kernel: thunder_xcv, ver 1.0
Feb 12 19:21:43.040558 kernel: thunder_bgx, ver 1.0
Feb 12 19:21:43.040566 kernel: nicpf, ver 1.0
Feb 12 19:21:43.040572 kernel: nicvf, ver 1.0
Feb 12 19:21:43.040692 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:21:43.040753 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:21:42 UTC (1707765702)
Feb 12 19:21:43.040762 kernel: efifb: probing for efifb
Feb 12 19:21:43.040769 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 12 19:21:43.040775 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 12 19:21:43.040782 kernel: efifb: scrolling: redraw
Feb 12 19:21:43.040792 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 12 19:21:43.040798 kernel: Console: switching to colour frame buffer device 128x48
Feb 12 19:21:43.040805 kernel: fb0: EFI VGA frame buffer device
Feb 12 19:21:43.040811 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 12 19:21:43.040818 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:21:43.040824 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:21:43.040831 kernel: Segment Routing with IPv6
Feb 12 19:21:43.040838 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:21:43.040844 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:21:43.040852 kernel: Key type dns_resolver registered
Feb 12 19:21:43.040858 kernel: registered taskstats version 1
Feb 12 19:21:43.040865 kernel: Loading compiled-in X.509 certificates
Feb 12 19:21:43.040871 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:21:43.040878 kernel: Key type .fscrypt registered
Feb 12 19:21:43.040885 kernel: Key type fscrypt-provisioning registered
Feb 12 19:21:43.040891 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:21:43.040898 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:21:43.040904 kernel: ima: No architecture policies found
Feb 12 19:21:43.040912 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:21:43.040919 kernel: Run /init as init process
Feb 12 19:21:43.040925 kernel: with arguments:
Feb 12 19:21:43.040932 kernel: /init
Feb 12 19:21:43.040938 kernel: with environment:
Feb 12 19:21:43.040944 kernel: HOME=/
Feb 12 19:21:43.040951 kernel: TERM=linux
Feb 12 19:21:43.040957 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:21:43.040966 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:21:43.040976 systemd[1]: Detected virtualization microsoft.
Feb 12 19:21:43.040983 systemd[1]: Detected architecture arm64.
Feb 12 19:21:43.040990 systemd[1]: Running in initrd.
Feb 12 19:21:43.040997 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:21:43.041003 systemd[1]: Hostname set to .
Feb 12 19:21:43.041011 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:21:43.041018 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:21:43.041026 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:21:43.041033 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:21:43.041039 systemd[1]: Reached target paths.target.
Feb 12 19:21:43.041046 systemd[1]: Reached target slices.target.
Feb 12 19:21:43.041053 systemd[1]: Reached target swap.target.
Feb 12 19:21:43.041060 systemd[1]: Reached target timers.target.
Feb 12 19:21:43.041068 systemd[1]: Listening on iscsid.socket.
Feb 12 19:21:43.041075 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:21:43.041083 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:21:43.041090 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:21:43.041097 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:21:43.041104 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:21:43.041111 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:21:43.041129 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:21:43.041136 systemd[1]: Reached target sockets.target.
Feb 12 19:21:43.041143 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:21:43.041150 systemd[1]: Finished network-cleanup.service.
Feb 12 19:21:43.041158 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:21:43.041165 systemd[1]: Starting systemd-journald.service...
Feb 12 19:21:43.041172 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:21:43.041179 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:21:43.041186 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:21:43.041197 systemd-journald[276]: Journal started
Feb 12 19:21:43.041235 systemd-journald[276]: Runtime Journal (/run/log/journal/5dbf59ecad104945b5c96da418306b47) is 8.0M, max 78.6M, 70.6M free.
Feb 12 19:21:43.035172 systemd-modules-load[277]: Inserted module 'overlay'
Feb 12 19:21:43.072132 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:21:43.086253 systemd[1]: Started systemd-journald.service.
Feb 12 19:21:43.086311 kernel: Bridge firewalling registered
Feb 12 19:21:43.086397 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 12 19:21:43.136684 kernel: audit: type=1130 audit(1707765703.092:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.136708 kernel: SCSI subsystem initialized
Feb 12 19:21:43.136717 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:21:43.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.089990 systemd-resolved[278]: Positive Trust Anchors:
Feb 12 19:21:43.177046 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:21:43.177067 kernel: audit: type=1130 audit(1707765703.141:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.177077 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:21:43.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.089998 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:21:43.238847 kernel: audit: type=1130 audit(1707765703.181:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.238872 kernel: audit: type=1130 audit(1707765703.210:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.090025 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:21:43.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.092076 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 12 19:21:43.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.114679 systemd[1]: Started systemd-resolved.service.
Feb 12 19:21:43.325737 kernel: audit: type=1130 audit(1707765703.263:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.325761 kernel: audit: type=1130 audit(1707765703.297:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.141509 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:21:43.182090 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:21:43.187070 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 12 19:21:43.211056 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:21:43.263423 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:21:43.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.297575 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:21:43.407416 kernel: audit: type=1130 audit(1707765703.373:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.331164 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:21:43.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.339900 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:21:43.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.348963 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:21:43.468407 kernel: audit: type=1130 audit(1707765703.402:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.468433 kernel: audit: type=1130 audit(1707765703.431:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.367095 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:21:43.389836 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:21:43.481319 dracut-cmdline[298]: dracut-dracut-053
Feb 12 19:21:43.403229 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:21:43.432912 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:21:43.500472 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:21:43.590139 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:21:43.602143 kernel: iscsi: registered transport (tcp)
Feb 12 19:21:43.623257 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:21:43.623292 kernel: QLogic iSCSI HBA Driver
Feb 12 19:21:43.652720 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:21:43.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:43.659281 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:21:43.713142 kernel: raid6: neonx8 gen() 13822 MB/s
Feb 12 19:21:43.733127 kernel: raid6: neonx8 xor() 10831 MB/s
Feb 12 19:21:43.754131 kernel: raid6: neonx4 gen() 13583 MB/s
Feb 12 19:21:43.776128 kernel: raid6: neonx4 xor() 11308 MB/s
Feb 12 19:21:43.797130 kernel: raid6: neonx2 gen() 13105 MB/s
Feb 12 19:21:43.820127 kernel: raid6: neonx2 xor() 10379 MB/s
Feb 12 19:21:43.841129 kernel: raid6: neonx1 gen() 10511 MB/s
Feb 12 19:21:43.863125 kernel: raid6: neonx1 xor() 8808 MB/s
Feb 12 19:21:43.884126 kernel: raid6: int64x8 gen() 6297 MB/s
Feb 12 19:21:43.904125 kernel: raid6: int64x8 xor() 3550 MB/s
Feb 12 19:21:43.925130 kernel: raid6: int64x4 gen() 7245 MB/s
Feb 12 19:21:43.945129 kernel: raid6: int64x4 xor() 3856 MB/s
Feb 12 19:21:43.965124 kernel: raid6: int64x2 gen() 6155 MB/s
Feb 12 19:21:43.986125 kernel: raid6: int64x2 xor() 3322 MB/s
Feb 12 19:21:44.007125 kernel: raid6: int64x1 gen() 5050 MB/s
Feb 12 19:21:44.031533 kernel: raid6: int64x1 xor() 2647 MB/s
Feb 12 19:21:44.031543 kernel: raid6: using algorithm neonx8 gen() 13822 MB/s
Feb 12 19:21:44.031551 kernel: raid6: .... xor() 10831 MB/s, rmw enabled
Feb 12 19:21:44.036985 kernel: raid6: using neon recovery algorithm
Feb 12 19:21:44.058832 kernel: xor: measuring software checksum speed
Feb 12 19:21:44.058844 kernel: 8regs : 17282 MB/sec
Feb 12 19:21:44.067637 kernel: 32regs : 20749 MB/sec
Feb 12 19:21:44.067647 kernel: arm64_neon : 27911 MB/sec
Feb 12 19:21:44.067655 kernel: xor: using function: arm64_neon (27911 MB/sec)
Feb 12 19:21:44.128133 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:21:44.137306 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:21:44.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:44.145000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:21:44.146000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:21:44.146722 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:21:44.161742 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Feb 12 19:21:44.167859 systemd[1]: Started systemd-udevd.service.
Feb 12 19:21:44.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:44.178014 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:21:44.196780 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation
Feb 12 19:21:44.237724 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:21:44.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:44.244165 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:21:44.285084 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:21:44.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:44.340172 kernel: hv_vmbus: Vmbus version:5.3
Feb 12 19:21:44.362216 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 12 19:21:44.362263 kernel: hv_vmbus: registering driver hid_hyperv
Feb 12 19:21:44.362273 kernel: hv_vmbus: registering driver hv_storvsc
Feb 12 19:21:44.362281 kernel: hv_vmbus: registering driver hv_netvsc
Feb 12 19:21:44.367212 kernel: scsi host1: storvsc_host_t
Feb 12 19:21:44.378341 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Feb 12 19:21:44.388573 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 12 19:21:44.393137 kernel: scsi host0: storvsc_host_t
Feb 12 19:21:44.393309 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Feb 12 19:21:44.411452 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 12 19:21:44.419103 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 12 19:21:44.438356 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 12 19:21:44.438544 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 19:21:44.440151 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 12 19:21:44.455282 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 12 19:21:44.455458 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 12 19:21:44.455543 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 12 19:21:44.459622 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 12 19:21:44.459810 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 12 19:21:44.478090 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:21:44.478155 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 12 19:21:44.503148 kernel: hv_netvsc 002248b6-7ebe-0022-48b6-7ebe002248b6 eth0: VF slot 1 added
Feb 12 19:21:44.512146
kernel: hv_vmbus: registering driver hv_pci Feb 12 19:21:44.520130 kernel: hv_pci 958af6e2-c955-4ab6-acea-2ac844234f2d: PCI VMBus probing: Using version 0x10004 Feb 12 19:21:44.536529 kernel: hv_pci 958af6e2-c955-4ab6-acea-2ac844234f2d: PCI host bridge to bus c955:00 Feb 12 19:21:44.536685 kernel: pci_bus c955:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 12 19:21:44.543362 kernel: pci_bus c955:00: No busn resource found for root bus, will use [bus 00-ff] Feb 12 19:21:44.550565 kernel: pci c955:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 12 19:21:44.563977 kernel: pci c955:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 12 19:21:44.585547 kernel: pci c955:00:02.0: enabling Extended Tags Feb 12 19:21:44.605230 kernel: pci c955:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c955:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 12 19:21:44.617370 kernel: pci_bus c955:00: busn_res: [bus 00-ff] end is updated to 00 Feb 12 19:21:44.617541 kernel: pci c955:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 12 19:21:44.659143 kernel: mlx5_core c955:00:02.0: firmware version: 16.30.1284 Feb 12 19:21:44.817262 kernel: mlx5_core c955:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Feb 12 19:21:44.877383 kernel: hv_netvsc 002248b6-7ebe-0022-48b6-7ebe002248b6 eth0: VF registering: eth1 Feb 12 19:21:44.877547 kernel: mlx5_core c955:00:02.0 eth1: joined to eth0 Feb 12 19:21:44.889136 kernel: mlx5_core c955:00:02.0 enP51541s1: renamed from eth1 Feb 12 19:21:44.957535 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:21:44.989138 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (539) Feb 12 19:21:45.001598 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:21:45.203479 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Feb 12 19:21:45.209972 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:21:45.238017 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:21:45.244502 systemd[1]: Starting disk-uuid.service... Feb 12 19:21:45.267135 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:21:46.283039 disk-uuid[604]: The operation has completed successfully. Feb 12 19:21:46.288498 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:21:46.345728 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:21:46.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.345830 systemd[1]: Finished disk-uuid.service. Feb 12 19:21:46.351032 systemd[1]: Starting verity-setup.service... Feb 12 19:21:46.411140 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 12 19:21:46.613057 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:21:46.619418 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:21:46.631244 systemd[1]: Finished verity-setup.service. Feb 12 19:21:46.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.688135 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:21:46.688581 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:21:46.692937 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:21:46.693693 systemd[1]: Starting ignition-setup.service... 
Feb 12 19:21:46.701507 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:21:46.739808 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:21:46.739838 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:21:46.744871 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:21:46.786777 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:21:46.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.797000 audit: BPF prog-id=9 op=LOAD Feb 12 19:21:46.797756 systemd[1]: Starting systemd-networkd.service... Feb 12 19:21:46.825384 systemd-networkd[868]: lo: Link UP Feb 12 19:21:46.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.825391 systemd-networkd[868]: lo: Gained carrier Feb 12 19:21:46.825768 systemd-networkd[868]: Enumeration completed Feb 12 19:21:46.825856 systemd[1]: Started systemd-networkd.service. Feb 12 19:21:46.831005 systemd[1]: Reached target network.target. Feb 12 19:21:46.839859 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:21:46.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.844706 systemd[1]: Starting iscsiuio.service... Feb 12 19:21:46.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:46.893775 iscsid[876]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:21:46.893775 iscsid[876]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 12 19:21:46.893775 iscsid[876]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:21:46.893775 iscsid[876]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:21:46.893775 iscsid[876]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:21:46.893775 iscsid[876]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:21:46.893775 iscsid[876]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:21:47.005380 kernel: mlx5_core c955:00:02.0 enP51541s1: Link up Feb 12 19:21:47.005554 kernel: hv_netvsc 002248b6-7ebe-0022-48b6-7ebe002248b6 eth0: Data path switched to VF: enP51541s1 Feb 12 19:21:46.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.862433 systemd[1]: Started iscsiuio.service. Feb 12 19:21:47.021422 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:21:46.872840 systemd[1]: Starting iscsid.service... Feb 12 19:21:46.884832 systemd[1]: Started iscsid.service. Feb 12 19:21:46.890483 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:21:46.938097 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:21:46.953732 systemd[1]: Finished dracut-initqueue.service. 
Feb 12 19:21:47.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.963593 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:21:47.083226 kernel: kauditd_printk_skb: 16 callbacks suppressed Feb 12 19:21:47.083249 kernel: audit: type=1130 audit(1707765707.050:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.975699 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:21:46.988708 systemd[1]: Reached target remote-fs.target. Feb 12 19:21:47.016205 systemd-networkd[868]: enP51541s1: Link UP Feb 12 19:21:47.016275 systemd-networkd[868]: eth0: Link UP Feb 12 19:21:47.016361 systemd-networkd[868]: eth0: Gained carrier Feb 12 19:21:47.021624 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:21:47.040359 systemd-networkd[868]: enP51541s1: Gained carrier Feb 12 19:21:47.041032 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:21:47.056203 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.25/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:21:47.133353 systemd[1]: Finished ignition-setup.service. Feb 12 19:21:47.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:47.139034 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:21:47.170274 kernel: audit: type=1130 audit(1707765707.137:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:49.032296 systemd-networkd[868]: eth0: Gained IPv6LL Feb 12 19:21:50.691754 ignition[895]: Ignition 2.14.0 Feb 12 19:21:50.691768 ignition[895]: Stage: fetch-offline Feb 12 19:21:50.691833 ignition[895]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:50.691856 ignition[895]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:50.817144 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:50.817314 ignition[895]: parsed url from cmdline: "" Feb 12 19:21:50.817319 ignition[895]: no config URL provided Feb 12 19:21:50.817325 ignition[895]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:21:50.865204 kernel: audit: type=1130 audit(1707765710.835:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:50.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:50.825168 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:21:50.817334 ignition[895]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:21:50.836377 systemd[1]: Starting ignition-fetch.service... 
Feb 12 19:21:50.817339 ignition[895]: failed to fetch config: resource requires networking Feb 12 19:21:50.817605 ignition[895]: Ignition finished successfully Feb 12 19:21:50.866434 ignition[901]: Ignition 2.14.0 Feb 12 19:21:50.866441 ignition[901]: Stage: fetch Feb 12 19:21:50.866548 ignition[901]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:50.866565 ignition[901]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:50.869183 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:50.869305 ignition[901]: parsed url from cmdline: "" Feb 12 19:21:50.912304 unknown[901]: fetched base config from "system" Feb 12 19:21:50.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:50.869309 ignition[901]: no config URL provided Feb 12 19:21:50.951982 kernel: audit: type=1130 audit(1707765710.921:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:50.912312 unknown[901]: fetched base config from "system" Feb 12 19:21:50.869313 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:21:50.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:50.912317 unknown[901]: fetched user config from "azure" Feb 12 19:21:50.998354 kernel: audit: type=1130 audit(1707765710.976:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:50.869321 ignition[901]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:21:50.915312 systemd[1]: Finished ignition-fetch.service. Feb 12 19:21:50.869366 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 12 19:21:50.922445 systemd[1]: Starting ignition-kargs.service... Feb 12 19:21:50.888839 ignition[901]: GET result: OK Feb 12 19:21:50.970987 systemd[1]: Finished ignition-kargs.service. Feb 12 19:21:50.888897 ignition[901]: config has been read from IMDS userdata Feb 12 19:21:51.016382 systemd[1]: Starting ignition-disks.service... Feb 12 19:21:51.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:50.888930 ignition[901]: parsing config with SHA512: fcd22a137571239b8254559ad553368d02b9f67b1e8b67142bb2cb73d1db87256ed284c79ed99dfaf8b1e8f3a6848b5b4075e75ee7302135c237bf04909c248e Feb 12 19:21:51.043304 systemd[1]: Finished ignition-disks.service. Feb 12 19:21:50.912797 ignition[901]: fetch: fetch complete Feb 12 19:21:51.096031 kernel: audit: type=1130 audit(1707765711.053:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:51.080741 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:21:50.912802 ignition[901]: fetch: fetch passed Feb 12 19:21:51.092131 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:21:50.912845 ignition[901]: Ignition finished successfully Feb 12 19:21:51.101171 systemd[1]: Reached target local-fs.target. Feb 12 19:21:50.959667 ignition[907]: Ignition 2.14.0 Feb 12 19:21:51.123211 systemd[1]: Reached target sysinit.target. 
Feb 12 19:21:50.959674 ignition[907]: Stage: kargs Feb 12 19:21:51.138286 systemd[1]: Reached target basic.target. Feb 12 19:21:50.959776 ignition[907]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:51.147715 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:21:50.959798 ignition[907]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:50.962433 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:50.968973 ignition[907]: kargs: kargs passed Feb 12 19:21:50.969041 ignition[907]: Ignition finished successfully Feb 12 19:21:51.035076 ignition[913]: Ignition 2.14.0 Feb 12 19:21:51.035083 ignition[913]: Stage: disks Feb 12 19:21:51.035211 ignition[913]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:51.035233 ignition[913]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:51.038075 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:51.040273 ignition[913]: disks: disks passed Feb 12 19:21:51.040332 ignition[913]: Ignition finished successfully Feb 12 19:21:51.271797 systemd-fsck[921]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 12 19:21:51.282037 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:21:51.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:51.311060 systemd[1]: Mounting sysroot.mount... Feb 12 19:21:51.320881 kernel: audit: type=1130 audit(1707765711.286:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:21:51.335138 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:21:51.335461 systemd[1]: Mounted sysroot.mount. Feb 12 19:21:51.339719 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:21:51.382614 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:21:51.387817 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 12 19:21:51.396771 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:21:51.396805 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:21:51.403266 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:21:51.450820 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:21:51.456457 systemd[1]: Starting initrd-setup-root.service... Feb 12 19:21:51.481380 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (932) Feb 12 19:21:51.489054 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:21:51.501944 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:21:51.501966 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:21:51.507072 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:21:51.511090 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:21:51.524810 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:21:51.534579 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:21:51.544535 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:21:52.119925 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:21:52.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:52.145403 systemd[1]: Starting ignition-mount.service... Feb 12 19:21:52.155507 kernel: audit: type=1130 audit(1707765712.124:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:52.155390 systemd[1]: Starting sysroot-boot.service... Feb 12 19:21:52.161020 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 19:21:52.161296 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 19:21:52.180429 ignition[998]: INFO : Ignition 2.14.0 Feb 12 19:21:52.180429 ignition[998]: INFO : Stage: mount Feb 12 19:21:52.192431 ignition[998]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:52.192431 ignition[998]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:52.192431 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:52.192431 ignition[998]: INFO : mount: mount passed Feb 12 19:21:52.192431 ignition[998]: INFO : Ignition finished successfully Feb 12 19:21:52.254405 kernel: audit: type=1130 audit(1707765712.198:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:52.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:52.191914 systemd[1]: Finished ignition-mount.service. Feb 12 19:21:52.260374 systemd[1]: Finished sysroot-boot.service. 
Feb 12 19:21:52.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:52.287171 kernel: audit: type=1130 audit(1707765712.264:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:52.919171 coreos-metadata[931]: Feb 12 19:21:52.919 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 12 19:21:52.929246 coreos-metadata[931]: Feb 12 19:21:52.929 INFO Fetch successful Feb 12 19:21:52.961921 coreos-metadata[931]: Feb 12 19:21:52.961 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 12 19:21:52.975008 coreos-metadata[931]: Feb 12 19:21:52.974 INFO Fetch successful Feb 12 19:21:52.981175 coreos-metadata[931]: Feb 12 19:21:52.980 INFO wrote hostname ci-3510.3.2-a-c97c98db58 to /sysroot/etc/hostname Feb 12 19:21:52.990768 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 12 19:21:52.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:52.997181 systemd[1]: Starting ignition-files.service... Feb 12 19:21:53.028078 kernel: audit: type=1130 audit(1707765712.996:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:53.027336 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 12 19:21:53.051419 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1010) Feb 12 19:21:53.051449 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:21:53.051458 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:21:53.060849 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:21:53.065784 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:21:53.079228 ignition[1029]: INFO : Ignition 2.14.0 Feb 12 19:21:53.079228 ignition[1029]: INFO : Stage: files Feb 12 19:21:53.091814 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:53.091814 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:53.091814 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:53.091814 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:21:53.091814 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:21:53.091814 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:21:53.173973 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:21:53.181856 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:21:53.194530 unknown[1029]: wrote ssh authorized keys file for user: core Feb 12 19:21:53.200068 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:21:53.200068 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 12 19:21:53.200068 ignition[1029]: INFO : files: createFilesystemsFiles: 
createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Feb 12 19:21:53.665556 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 19:21:53.940106 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Feb 12 19:21:53.940106 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 12 19:21:53.968094 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 12 19:21:53.968094 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Feb 12 19:21:54.315153 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:21:54.450392 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Feb 12 19:21:54.467247 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 12 19:21:54.467247 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:21:54.467247 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1 Feb 12 19:21:54.581585 ignition[1029]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:21:54.859805 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970 Feb 12 19:21:54.876782 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:21:54.876782 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:21:54.876782 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1 Feb 12 19:21:54.920373 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:21:55.547187 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348 Feb 12 19:21:55.565479 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:21:55.565479 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:21:55.565479 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:21:55.565479 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:21:55.565479 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:21:55.565479 ignition[1029]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:21:55.565479 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:21:55.565479 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:21:55.565479 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:21:55.710096 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1034) Feb 12 19:21:55.710132 kernel: audit: type=1130 audit(1707765715.638:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:55.710186 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4087280701" Feb 12 19:21:55.710186 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4087280701": device or resource busy Feb 12 19:21:55.710186 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4087280701", trying btrfs: device or resource busy Feb 12 19:21:55.710186 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4087280701" Feb 12 19:21:55.710186 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4087280701" Feb 12 19:21:55.710186 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem4087280701" Feb 12 19:21:55.710186 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem4087280701" Feb 12 19:21:55.710186 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:21:55.710186 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:21:55.710186 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:21:55.710186 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2239277341" Feb 12 19:21:55.710186 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: 
op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2239277341": device or resource busy Feb 12 19:21:55.710186 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2239277341", trying btrfs: device or resource busy Feb 12 19:21:55.710186 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2239277341" Feb 12 19:21:56.004371 kernel: audit: type=1130 audit(1707765715.715:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.004398 kernel: audit: type=1131 audit(1707765715.734:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.004413 kernel: audit: type=1130 audit(1707765715.791:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.004423 kernel: audit: type=1130 audit(1707765715.870:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.004432 kernel: audit: type=1131 audit(1707765715.870:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:55.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.608758 systemd[1]: mnt-oem4087280701.mount: Deactivated successfully. Feb 12 19:21:56.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:56.014719 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2239277341" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem2239277341" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem2239277341" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(12): [started] processing unit "waagent.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(12): [finished] processing unit "waagent.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(13): [started] processing unit "nvidia.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(13): [finished] processing unit "nvidia.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(16): [started] processing unit "prepare-critools.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:21:56.014719 ignition[1029]: 
INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(16): [finished] processing unit "prepare-critools.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Feb 12 19:21:56.014719 ignition[1029]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Feb 12 19:21:56.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.626903 systemd[1]: Finished ignition-files.service. Feb 12 19:21:56.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:56.288730 ignition[1029]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:21:56.288730 ignition[1029]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:21:56.288730 ignition[1029]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:21:56.288730 ignition[1029]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:21:56.288730 ignition[1029]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:21:56.288730 ignition[1029]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:21:56.288730 ignition[1029]: INFO : files: files passed Feb 12 19:21:56.288730 ignition[1029]: INFO : Ignition finished successfully Feb 12 19:21:56.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:56.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.390132 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:21:56.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.639418 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:21:56.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.670495 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:21:56.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.672967 systemd[1]: Starting ignition-quench.service... Feb 12 19:21:56.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.687435 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:21:55.687619 systemd[1]: Finished ignition-quench.service. 
Feb 12 19:21:56.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.772467 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:21:55.821683 systemd[1]: Reached target ignition-complete.target. Feb 12 19:21:56.473655 ignition[1067]: INFO : Ignition 2.14.0 Feb 12 19:21:56.473655 ignition[1067]: INFO : Stage: umount Feb 12 19:21:56.473655 ignition[1067]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:56.473655 ignition[1067]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:56.473655 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:56.473655 ignition[1067]: INFO : umount: umount passed Feb 12 19:21:56.473655 ignition[1067]: INFO : Ignition finished successfully Feb 12 19:21:56.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.839293 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:21:55.863671 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:21:55.863774 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:21:56.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.870912 systemd[1]: Reached target initrd-fs.target. Feb 12 19:21:56.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:56.586000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:21:55.921531 systemd[1]: Reached target initrd.target. Feb 12 19:21:55.938033 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:21:56.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:55.938885 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:21:56.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.004680 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:21:56.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.010194 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:21:56.030903 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:21:56.043897 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:21:56.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.058093 systemd[1]: Stopped target timers.target. Feb 12 19:21:56.072858 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:21:56.072924 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:21:56.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.084148 systemd[1]: Stopped target initrd.target. 
Feb 12 19:21:56.713093 kernel: hv_netvsc 002248b6-7ebe-0022-48b6-7ebe002248b6 eth0: Data path switched from VF: enP51541s1 Feb 12 19:21:56.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.095256 systemd[1]: Stopped target basic.target. Feb 12 19:21:56.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.106055 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:21:56.118991 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:21:56.131684 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:21:56.148516 systemd[1]: Stopped target remote-fs.target. Feb 12 19:21:56.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.164751 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:21:56.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.177135 systemd[1]: Stopped target sysinit.target. Feb 12 19:21:56.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.189017 systemd[1]: Stopped target local-fs.target. Feb 12 19:21:56.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:56.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.204783 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:21:56.221015 systemd[1]: Stopped target swap.target. Feb 12 19:21:56.233329 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:21:56.233392 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:21:56.245884 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:21:56.258616 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:21:56.258669 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:21:56.271259 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:21:56.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.271296 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:21:56.284546 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:21:56.284582 systemd[1]: Stopped ignition-files.service. Feb 12 19:21:56.293097 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 12 19:21:56.293154 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 12 19:21:56.307081 systemd[1]: Stopping ignition-mount.service... Feb 12 19:21:56.320219 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:21:56.324042 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:21:56.324152 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:21:56.350527 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:21:56.350592 systemd[1]: Stopped dracut-pre-trigger.service. 
Feb 12 19:21:56.366618 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:21:56.366727 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:21:56.388205 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:21:56.388639 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:21:56.388729 systemd[1]: Stopped ignition-mount.service. Feb 12 19:21:56.395320 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:21:56.395372 systemd[1]: Stopped ignition-disks.service. Feb 12 19:21:56.410262 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:21:56.410309 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:21:56.420902 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 19:21:56.420940 systemd[1]: Stopped ignition-fetch.service. Feb 12 19:21:56.436359 systemd[1]: Stopped target network.target. Feb 12 19:21:56.444783 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:21:56.444839 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:21:56.454758 systemd[1]: Stopped target paths.target. Feb 12 19:21:56.462831 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:21:56.473548 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:21:56.478774 systemd[1]: Stopped target slices.target. Feb 12 19:21:56.487864 systemd[1]: Stopped target sockets.target. Feb 12 19:21:56.495867 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:21:56.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.495900 systemd[1]: Closed iscsid.socket. Feb 12 19:21:56.506756 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Feb 12 19:21:57.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:56.506781 systemd[1]: Closed iscsiuio.socket. Feb 12 19:21:56.525177 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:21:56.525222 systemd[1]: Stopped ignition-setup.service. Feb 12 19:21:56.537831 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:21:56.546860 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:21:56.553167 systemd-networkd[868]: eth0: DHCPv6 lease lost Feb 12 19:21:57.037000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:21:56.563909 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:21:56.564006 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:21:56.577178 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:21:56.577268 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:21:57.067066 iscsid[876]: iscsid shutting down. Feb 12 19:21:57.067145 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Feb 12 19:21:56.586283 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:21:56.586317 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:21:56.594876 systemd[1]: Stopping network-cleanup.service... Feb 12 19:21:56.602254 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:21:56.602310 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:21:56.607716 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:21:56.607761 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:21:56.631894 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:21:56.631950 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:21:56.636915 systemd[1]: Stopping systemd-udevd.service... 
Feb 12 19:21:56.648360 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:21:56.650948 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:21:56.651096 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:21:56.663043 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:21:56.663086 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:21:56.672479 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:21:56.672515 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:21:56.681168 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:21:56.681217 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:21:56.691145 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:21:56.691184 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:21:56.709337 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:21:56.709386 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:21:56.718661 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:21:56.737014 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 19:21:56.737464 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 19:21:56.751662 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:21:56.751724 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:21:56.756472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:21:56.756512 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:21:56.766925 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 19:21:56.767414 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:21:56.767516 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
Feb 12 19:21:56.809594 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:21:56.809696 systemd[1]: Stopped network-cleanup.service. Feb 12 19:21:56.968374 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:21:56.974149 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:21:56.983250 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:21:56.992711 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:21:56.992772 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:21:57.001402 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:21:57.021967 systemd[1]: Switching root. Feb 12 19:21:57.068199 systemd-journald[276]: Journal stopped Feb 12 19:22:10.426961 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:22:10.426981 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:22:10.426992 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:22:10.427002 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:22:10.427009 kernel: SELinux: policy capability open_perms=1 Feb 12 19:22:10.427017 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:22:10.427026 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:22:10.427034 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:22:10.427042 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:22:10.427050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:22:10.427060 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:22:10.427068 kernel: kauditd_printk_skb: 36 callbacks suppressed Feb 12 19:22:10.427077 kernel: audit: type=1403 audit(1707765719.520:80): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 19:22:10.427086 systemd[1]: Successfully loaded SELinux policy in 276.217ms. Feb 12 19:22:10.427097 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.214ms. 
Feb 12 19:22:10.427109 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:22:10.427131 systemd[1]: Detected virtualization microsoft. Feb 12 19:22:10.427142 systemd[1]: Detected architecture arm64. Feb 12 19:22:10.427150 systemd[1]: Detected first boot. Feb 12 19:22:10.427160 systemd[1]: Hostname set to . Feb 12 19:22:10.427169 systemd[1]: Initializing machine ID from random generator. Feb 12 19:22:10.427179 kernel: audit: type=1400 audit(1707765720.305:81): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:22:10.427191 kernel: audit: type=1400 audit(1707765720.308:82): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:22:10.427200 kernel: audit: type=1334 audit(1707765720.323:83): prog-id=10 op=LOAD Feb 12 19:22:10.427208 kernel: audit: type=1334 audit(1707765720.323:84): prog-id=10 op=UNLOAD Feb 12 19:22:10.427217 kernel: audit: type=1334 audit(1707765720.340:85): prog-id=11 op=LOAD Feb 12 19:22:10.427225 kernel: audit: type=1334 audit(1707765720.340:86): prog-id=11 op=UNLOAD Feb 12 19:22:10.427233 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 12 19:22:10.427243 kernel: audit: type=1400 audit(1707765721.906:87): avc: denied { associate } for pid=1100 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:22:10.427253 kernel: audit: type=1300 audit(1707765721.906:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000227ec a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1083 pid=1100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:10.427262 kernel: audit: type=1327 audit(1707765721.906:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:22:10.427271 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:22:10.427281 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:22:10.427290 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:22:10.427300 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 12 19:22:10.427310 kernel: kauditd_printk_skb: 6 callbacks suppressed Feb 12 19:22:10.427318 kernel: audit: type=1334 audit(1707765729.580:89): prog-id=12 op=LOAD Feb 12 19:22:10.427326 kernel: audit: type=1334 audit(1707765729.580:90): prog-id=3 op=UNLOAD Feb 12 19:22:10.427335 kernel: audit: type=1334 audit(1707765729.586:91): prog-id=13 op=LOAD Feb 12 19:22:10.427343 kernel: audit: type=1334 audit(1707765729.592:92): prog-id=14 op=LOAD Feb 12 19:22:10.427354 kernel: audit: type=1334 audit(1707765729.592:93): prog-id=4 op=UNLOAD Feb 12 19:22:10.427362 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:22:10.427372 kernel: audit: type=1334 audit(1707765729.592:94): prog-id=5 op=UNLOAD Feb 12 19:22:10.427381 systemd[1]: Stopped iscsiuio.service. Feb 12 19:22:10.427392 kernel: audit: type=1334 audit(1707765729.597:95): prog-id=15 op=LOAD Feb 12 19:22:10.427401 kernel: audit: type=1334 audit(1707765729.597:96): prog-id=12 op=UNLOAD Feb 12 19:22:10.427409 kernel: audit: type=1334 audit(1707765729.603:97): prog-id=16 op=LOAD Feb 12 19:22:10.427418 kernel: audit: type=1334 audit(1707765729.610:98): prog-id=17 op=LOAD Feb 12 19:22:10.427426 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:22:10.427436 systemd[1]: Stopped iscsid.service. Feb 12 19:22:10.427445 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 19:22:10.427455 systemd[1]: Stopped initrd-switch-root.service. Feb 12 19:22:10.427465 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 19:22:10.427474 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:22:10.427484 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:22:10.427493 systemd[1]: Created slice system-getty.slice. Feb 12 19:22:10.427502 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:22:10.427511 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Feb 12 19:22:10.427521 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:22:10.427530 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:22:10.427540 systemd[1]: Created slice user.slice. Feb 12 19:22:10.427550 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:22:10.427559 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:22:10.427569 systemd[1]: Set up automount boot.automount. Feb 12 19:22:10.427578 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:22:10.427588 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 19:22:10.427597 systemd[1]: Stopped target initrd-fs.target. Feb 12 19:22:10.427607 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 19:22:10.427616 systemd[1]: Reached target integritysetup.target. Feb 12 19:22:10.427625 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:22:10.427635 systemd[1]: Reached target remote-fs.target. Feb 12 19:22:10.427644 systemd[1]: Reached target slices.target. Feb 12 19:22:10.427653 systemd[1]: Reached target swap.target. Feb 12 19:22:10.427662 systemd[1]: Reached target torcx.target. Feb 12 19:22:10.427673 systemd[1]: Reached target veritysetup.target. Feb 12 19:22:10.427682 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:22:10.427691 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:22:10.427701 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:22:10.427710 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:22:10.427719 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:22:10.427729 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:22:10.427739 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:22:10.427748 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:22:10.427758 systemd[1]: Mounting media.mount... Feb 12 19:22:10.427768 systemd[1]: Mounting sys-kernel-debug.mount... 
Feb 12 19:22:10.427777 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:22:10.427786 systemd[1]: Mounting tmp.mount... Feb 12 19:22:10.427796 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:22:10.427805 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:22:10.427815 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:22:10.427825 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:22:10.427835 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:22:10.427844 systemd[1]: Starting modprobe@drm.service... Feb 12 19:22:10.427853 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:22:10.427862 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:22:10.427872 systemd[1]: Starting modprobe@loop.service... Feb 12 19:22:10.427881 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:22:10.427891 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 19:22:10.427900 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 19:22:10.427911 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 19:22:10.427920 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 19:22:10.427929 systemd[1]: Stopped systemd-journald.service. Feb 12 19:22:10.427938 kernel: loop: module loaded Feb 12 19:22:10.427947 systemd[1]: systemd-journald.service: Consumed 3.250s CPU time. Feb 12 19:22:10.427956 systemd[1]: Starting systemd-journald.service... Feb 12 19:22:10.427967 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:22:10.427976 kernel: fuse: init (API version 7.34) Feb 12 19:22:10.427985 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:22:10.427996 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:22:10.428005 systemd[1]: Starting systemd-udev-trigger.service... 
Feb 12 19:22:10.428015 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 19:22:10.428024 systemd[1]: Stopped verity-setup.service. Feb 12 19:22:10.428033 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:22:10.428043 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:22:10.428052 systemd[1]: Mounted media.mount. Feb 12 19:22:10.428061 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:22:10.428070 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:22:10.428081 systemd[1]: Mounted tmp.mount. Feb 12 19:22:10.428093 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:22:10.428104 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:22:10.428125 systemd-journald[1206]: Journal started Feb 12 19:22:10.428162 systemd-journald[1206]: Runtime Journal (/run/log/journal/27801aa1d2fa4036bca2973676b23096) is 8.0M, max 78.6M, 70.6M free. Feb 12 19:21:59.520000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 19:22:00.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:22:00.308000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:22:00.323000 audit: BPF prog-id=10 op=LOAD Feb 12 19:22:00.323000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:22:00.340000 audit: BPF prog-id=11 op=LOAD Feb 12 19:22:00.340000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:22:01.906000 audit[1100]: AVC avc: denied { associate } for pid=1100 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:22:01.906000 audit[1100]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 
a0=40000227ec a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1083 pid=1100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:01.906000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:22:01.915000 audit[1100]: AVC avc: denied { associate } for pid=1100 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:22:01.915000 audit[1100]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228c5 a2=1ed a3=0 items=2 ppid=1083 pid=1100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:01.915000 audit: CWD cwd="/" Feb 12 19:22:01.915000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:01.915000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:01.915000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 
19:22:09.580000 audit: BPF prog-id=12 op=LOAD Feb 12 19:22:09.580000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:22:09.586000 audit: BPF prog-id=13 op=LOAD Feb 12 19:22:09.592000 audit: BPF prog-id=14 op=LOAD Feb 12 19:22:09.592000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:22:09.592000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:22:09.597000 audit: BPF prog-id=15 op=LOAD Feb 12 19:22:09.597000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:22:09.603000 audit: BPF prog-id=16 op=LOAD Feb 12 19:22:09.610000 audit: BPF prog-id=17 op=LOAD Feb 12 19:22:09.610000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:22:09.610000 audit: BPF prog-id=14 op=UNLOAD Feb 12 19:22:09.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:09.639000 audit: BPF prog-id=15 op=UNLOAD Feb 12 19:22:09.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:09.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:09.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:09.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:10.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.267000 audit: BPF prog-id=18 op=LOAD Feb 12 19:22:10.267000 audit: BPF prog-id=19 op=LOAD Feb 12 19:22:10.267000 audit: BPF prog-id=20 op=LOAD Feb 12 19:22:10.267000 audit: BPF prog-id=16 op=UNLOAD Feb 12 19:22:10.267000 audit: BPF prog-id=17 op=UNLOAD Feb 12 19:22:10.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:10.424000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:22:10.424000 audit[1206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffef691680 a2=4000 a3=1 items=0 ppid=1 pid=1206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:10.424000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:22:01.739247 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:22:09.579566 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:22:01.739630 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:22:09.611032 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 19:22:10.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:01.739649 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:22:09.611689 systemd[1]: systemd-journald.service: Consumed 3.250s CPU time. 
Feb 12 19:22:01.739689 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 19:22:01.739699 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 19:22:01.739738 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 19:22:01.739749 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 19:22:01.739951 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 19:22:01.739984 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:22:01.739995 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:22:01.755465 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 19:22:01.755498 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 19:22:01.755521 /usr/lib/systemd/system-generators/torcx-generator[1100]: 
time="2024-02-12T19:22:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 19:22:01.755536 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 19:22:01.755553 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 19:22:01.755566 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 19:22:08.451565 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:08Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:22:08.451812 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:08Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:22:08.451904 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:08Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:22:08.452057 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:08Z" level=debug msg="systemd units propagated" 
assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:22:08.452104 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:08Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 19:22:08.452175 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2024-02-12T19:22:08Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 19:22:10.438966 systemd[1]: Started systemd-journald.service. Feb 12 19:22:10.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.439939 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:22:10.440089 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:22:10.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.445062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:22:10.445265 systemd[1]: Finished modprobe@dm_mod.service. 
Feb 12 19:22:10.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.450159 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:22:10.450280 systemd[1]: Finished modprobe@drm.service. Feb 12 19:22:10.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.455086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:22:10.455218 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:22:10.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.460480 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:22:10.460595 systemd[1]: Finished modprobe@fuse.service. 
Feb 12 19:22:10.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.465386 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:22:10.465498 systemd[1]: Finished modprobe@loop.service. Feb 12 19:22:10.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.470615 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:22:10.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.476244 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:22:10.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.481537 systemd[1]: Reached target network-pre.target. Feb 12 19:22:10.487605 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:22:10.493346 systemd[1]: Mounting sys-kernel-config.mount... 
Feb 12 19:22:10.500718 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:22:10.518676 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:22:10.524291 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:22:10.528804 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:22:10.529917 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:22:10.534741 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:22:10.535939 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:22:10.541894 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:22:10.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.547480 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:22:10.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.552824 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:22:10.557874 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:22:10.564540 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:22:10.569708 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:22:10.577293 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 19:22:10.596902 systemd[1]: Finished systemd-random-seed.service. 
Feb 12 19:22:10.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.602092 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:22:10.635143 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:22:10.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:10.660390 systemd-journald[1206]: Time spent on flushing to /var/log/journal/27801aa1d2fa4036bca2973676b23096 is 13.641ms for 1114 entries. Feb 12 19:22:10.660390 systemd-journald[1206]: System Journal (/var/log/journal/27801aa1d2fa4036bca2973676b23096) is 8.0M, max 2.6G, 2.6G free. Feb 12 19:22:10.739091 systemd-journald[1206]: Received client request to flush runtime journal. Feb 12 19:22:10.740085 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:22:10.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:11.284276 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:22:11.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:11.290494 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:22:11.576926 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 12 19:22:11.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:11.972473 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:22:11.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:11.977000 audit: BPF prog-id=21 op=LOAD Feb 12 19:22:11.978000 audit: BPF prog-id=22 op=LOAD Feb 12 19:22:11.978000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:22:11.978000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:22:11.978774 systemd[1]: Starting systemd-udevd.service... Feb 12 19:22:11.996876 systemd-udevd[1225]: Using default interface naming scheme 'v252'. Feb 12 19:22:12.246954 systemd[1]: Started systemd-udevd.service. Feb 12 19:22:12.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:12.257000 audit: BPF prog-id=23 op=LOAD Feb 12 19:22:12.258638 systemd[1]: Starting systemd-networkd.service... Feb 12 19:22:12.279389 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 12 19:22:12.318847 systemd[1]: Starting systemd-userdbd.service... 
Feb 12 19:22:12.317000 audit: BPF prog-id=24 op=LOAD Feb 12 19:22:12.318000 audit: BPF prog-id=25 op=LOAD Feb 12 19:22:12.318000 audit: BPF prog-id=26 op=LOAD Feb 12 19:22:12.342000 audit[1243]: AVC avc: denied { confidentiality } for pid=1243 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:22:12.366176 kernel: hv_vmbus: registering driver hv_balloon Feb 12 19:22:12.366277 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 12 19:22:12.366304 kernel: hv_vmbus: registering driver hyperv_fb Feb 12 19:22:12.366325 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 12 19:22:12.374544 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 12 19:22:12.382225 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 12 19:22:12.389469 kernel: Console: switching to colour dummy device 80x25 Feb 12 19:22:12.398289 kernel: Console: switching to colour frame buffer device 128x48 Feb 12 19:22:12.398265 systemd[1]: Started systemd-userdbd.service. 
Feb 12 19:22:12.413927 kernel: hv_utils: Registering HyperV Utility Driver Feb 12 19:22:12.414103 kernel: hv_vmbus: registering driver hv_utils Feb 12 19:22:12.342000 audit[1243]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaeb3e6610 a1=aa2c a2=ffffbe0524b0 a3=aaaaeb33e010 items=12 ppid=1225 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:12.342000 audit: CWD cwd="/" Feb 12 19:22:12.342000 audit: PATH item=0 name=(null) inode=6652 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PATH item=1 name=(null) inode=10223 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PATH item=2 name=(null) inode=10223 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PATH item=3 name=(null) inode=10224 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PATH item=4 name=(null) inode=10223 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PATH item=5 name=(null) inode=10225 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PATH item=6 name=(null) inode=10223 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PATH item=7 name=(null) inode=10226 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PATH item=8 name=(null) inode=10223 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PATH item=9 name=(null) inode=10227 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PATH item=10 name=(null) inode=10223 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PATH item=11 name=(null) inode=10228 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:12.342000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:22:12.419958 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:22:12.420238 kernel: hv_utils: Heartbeat IC version 3.0 Feb 12 19:22:12.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:12.426137 kernel: hv_utils: Shutdown IC version 3.2 Feb 12 19:22:12.427155 kernel: hv_utils: TimeSync IC version 4.0 Feb 12 19:22:13.299061 systemd-networkd[1246]: lo: Link UP Feb 12 19:22:13.299074 systemd-networkd[1246]: lo: Gained carrier Feb 12 19:22:13.299463 systemd-networkd[1246]: Enumeration completed Feb 12 19:22:13.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:13.299570 systemd[1]: Started systemd-networkd.service. Feb 12 19:22:13.305540 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:22:13.329686 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:22:13.365788 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1231) Feb 12 19:22:13.381813 kernel: mlx5_core c955:00:02.0 enP51541s1: Link up Feb 12 19:22:13.391591 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:22:13.399149 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:22:13.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:13.412767 kernel: hv_netvsc 002248b6-7ebe-0022-48b6-7ebe002248b6 eth0: Data path switched to VF: enP51541s1 Feb 12 19:22:13.413569 systemd-networkd[1246]: enP51541s1: Link UP Feb 12 19:22:13.413929 systemd-networkd[1246]: eth0: Link UP Feb 12 19:22:13.413940 systemd-networkd[1246]: eth0: Gained carrier Feb 12 19:22:13.414259 systemd[1]: Starting lvm2-activation-early.service... 
Feb 12 19:22:13.421012 systemd-networkd[1246]: enP51541s1: Gained carrier Feb 12 19:22:13.424849 systemd-networkd[1246]: eth0: DHCPv4 address 10.200.20.25/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:22:13.783619 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:22:13.819645 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:22:13.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:13.825185 systemd[1]: Reached target cryptsetup.target. Feb 12 19:22:13.831485 systemd[1]: Starting lvm2-activation.service... Feb 12 19:22:13.835602 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:22:13.854668 systemd[1]: Finished lvm2-activation.service. Feb 12 19:22:13.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:13.859861 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:22:13.864709 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:22:13.864829 systemd[1]: Reached target local-fs.target. Feb 12 19:22:13.869359 systemd[1]: Reached target machines.target. Feb 12 19:22:13.875574 systemd[1]: Starting ldconfig.service... Feb 12 19:22:13.879725 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:22:13.879830 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 12 19:22:13.880897 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:22:13.886189 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:22:13.893501 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:22:13.898342 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:22:13.898394 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:22:13.899455 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:22:13.912012 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:22:13.944315 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:22:13.945619 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:22:13.949444 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1306 (bootctl) Feb 12 19:22:13.950778 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:22:14.005349 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:22:14.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:14.326814 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:22:14.328255 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:22:14.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:14.438169 systemd-fsck[1314]: fsck.fat 4.2 (2021-01-31) Feb 12 19:22:14.438169 systemd-fsck[1314]: /dev/sda1: 236 files, 113719/258078 clusters Feb 12 19:22:14.439776 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:22:14.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:14.452948 systemd[1]: Mounting boot.mount... Feb 12 19:22:14.465118 systemd[1]: Mounted boot.mount. Feb 12 19:22:14.476470 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:22:14.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:14.783844 systemd-networkd[1246]: eth0: Gained IPv6LL Feb 12 19:22:14.788771 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:22:14.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.446938 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:22:16.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.453529 systemd[1]: Starting audit-rules.service... 
Feb 12 19:22:16.456289 kernel: kauditd_printk_skb: 81 callbacks suppressed Feb 12 19:22:16.456353 kernel: audit: type=1130 audit(1707765736.451:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.485286 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:22:16.491506 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:22:16.497000 audit: BPF prog-id=27 op=LOAD Feb 12 19:22:16.503767 kernel: audit: type=1334 audit(1707765736.497:164): prog-id=27 op=LOAD Feb 12 19:22:16.504543 systemd[1]: Starting systemd-resolved.service... Feb 12 19:22:16.509000 audit: BPF prog-id=28 op=LOAD Feb 12 19:22:16.515797 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:22:16.517755 kernel: audit: type=1334 audit(1707765736.509:165): prog-id=28 op=LOAD Feb 12 19:22:16.522136 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:22:16.562000 audit[1326]: SYSTEM_BOOT pid=1326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.586488 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:22:16.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.592417 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 12 19:22:16.593111 kernel: audit: type=1127 audit(1707765736.562:166): pid=1326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.593514 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:22:16.612763 kernel: audit: type=1130 audit(1707765736.591:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.633756 kernel: audit: type=1130 audit(1707765736.614:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.654917 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:22:16.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.659970 systemd[1]: Reached target time-set.target. Feb 12 19:22:16.684051 kernel: audit: type=1130 audit(1707765736.659:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.720214 systemd-resolved[1324]: Positive Trust Anchors: Feb 12 19:22:16.720228 systemd-resolved[1324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:22:16.720254 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:22:16.735911 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:22:16.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.762774 kernel: audit: type=1130 audit(1707765736.741:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.763638 systemd-resolved[1324]: Using system hostname 'ci-3510.3.2-a-c97c98db58'. Feb 12 19:22:16.764996 systemd[1]: Started systemd-resolved.service. Feb 12 19:22:16.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:16.769799 systemd[1]: Reached target network.target. Feb 12 19:22:16.793534 systemd[1]: Reached target network-online.target. Feb 12 19:22:16.794763 kernel: audit: type=1130 audit(1707765736.768:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:22:16.799455 systemd[1]: Reached target nss-lookup.target. Feb 12 19:22:16.942000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:22:16.943795 augenrules[1342]: No rules Feb 12 19:22:16.942000 audit[1342]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd0f56f10 a2=420 a3=0 items=0 ppid=1320 pid=1342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:16.942000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:22:16.956657 systemd[1]: Finished audit-rules.service. Feb 12 19:22:16.956766 kernel: audit: type=1305 audit(1707765736.942:172): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:22:16.971518 systemd-timesyncd[1325]: Contacted time server 45.79.35.159:123 (0.flatcar.pool.ntp.org). Feb 12 19:22:16.971593 systemd-timesyncd[1325]: Initial clock synchronization to Mon 2024-02-12 19:22:16.962124 UTC. Feb 12 19:22:23.551279 ldconfig[1305]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:22:23.566221 systemd[1]: Finished ldconfig.service. Feb 12 19:22:23.572302 systemd[1]: Starting systemd-update-done.service... Feb 12 19:22:23.607765 systemd[1]: Finished systemd-update-done.service. Feb 12 19:22:23.612913 systemd[1]: Reached target sysinit.target. Feb 12 19:22:23.617425 systemd[1]: Started motdgen.path. Feb 12 19:22:23.621283 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:22:23.628618 systemd[1]: Started logrotate.timer. Feb 12 19:22:23.632964 systemd[1]: Started mdadm.timer. Feb 12 19:22:23.636745 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Feb 12 19:22:23.641997 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:22:23.642030 systemd[1]: Reached target paths.target. Feb 12 19:22:23.646311 systemd[1]: Reached target timers.target. Feb 12 19:22:23.651359 systemd[1]: Listening on dbus.socket. Feb 12 19:22:23.656756 systemd[1]: Starting docker.socket... Feb 12 19:22:23.663031 systemd[1]: Listening on sshd.socket. Feb 12 19:22:23.667167 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:22:23.667611 systemd[1]: Listening on docker.socket. Feb 12 19:22:23.672158 systemd[1]: Reached target sockets.target. Feb 12 19:22:23.676965 systemd[1]: Reached target basic.target. Feb 12 19:22:23.681378 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:22:23.681405 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:22:23.682419 systemd[1]: Starting containerd.service... Feb 12 19:22:23.687153 systemd[1]: Starting dbus.service... Feb 12 19:22:23.691788 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:22:23.697294 systemd[1]: Starting extend-filesystems.service... Feb 12 19:22:23.701606 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:22:23.702614 systemd[1]: Starting motdgen.service... Feb 12 19:22:23.707246 systemd[1]: Started nvidia.service. Feb 12 19:22:23.712237 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:22:23.717411 systemd[1]: Starting prepare-critools.service... Feb 12 19:22:23.723076 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:22:23.728596 systemd[1]: Starting sshd-keygen.service... 
Feb 12 19:22:23.734313 systemd[1]: Starting systemd-logind.service... Feb 12 19:22:23.738546 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:22:23.738607 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:22:23.739011 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:22:23.739596 systemd[1]: Starting update-engine.service... Feb 12 19:22:23.747067 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:22:23.755133 jq[1352]: false Feb 12 19:22:23.757058 jq[1370]: true Feb 12 19:22:23.758604 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:22:23.758779 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:22:23.771081 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:22:23.771251 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:22:23.793625 extend-filesystems[1353]: Found sda Feb 12 19:22:23.793625 extend-filesystems[1353]: Found sda1 Feb 12 19:22:23.793625 extend-filesystems[1353]: Found sda2 Feb 12 19:22:23.793625 extend-filesystems[1353]: Found sda3 Feb 12 19:22:23.793625 extend-filesystems[1353]: Found usr Feb 12 19:22:23.793625 extend-filesystems[1353]: Found sda4 Feb 12 19:22:23.793625 extend-filesystems[1353]: Found sda6 Feb 12 19:22:23.793625 extend-filesystems[1353]: Found sda7 Feb 12 19:22:23.793625 extend-filesystems[1353]: Found sda9 Feb 12 19:22:23.793625 extend-filesystems[1353]: Checking size of /dev/sda9 Feb 12 19:22:23.805313 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:22:23.862090 jq[1375]: true Feb 12 19:22:23.805494 systemd[1]: Finished motdgen.service. 
Feb 12 19:22:23.862272 env[1376]: time="2024-02-12T19:22:23.837922675Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:22:23.848347 systemd-logind[1365]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 12 19:22:23.848535 systemd-logind[1365]: New seat seat0. Feb 12 19:22:23.887770 tar[1373]: ./ Feb 12 19:22:23.887770 tar[1373]: ./loopback Feb 12 19:22:23.889425 env[1376]: time="2024-02-12T19:22:23.889386289Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:22:23.891958 tar[1374]: crictl Feb 12 19:22:23.901079 extend-filesystems[1353]: Old size kept for /dev/sda9 Feb 12 19:22:23.906243 extend-filesystems[1353]: Found sr0 Feb 12 19:22:23.906235 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:22:23.923181 env[1376]: time="2024-02-12T19:22:23.912466200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:22:23.923181 env[1376]: time="2024-02-12T19:22:23.914936307Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:22:23.923181 env[1376]: time="2024-02-12T19:22:23.914967695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:22:23.923181 env[1376]: time="2024-02-12T19:22:23.915172054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:22:23.923181 env[1376]: time="2024-02-12T19:22:23.915189448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:22:23.923181 env[1376]: time="2024-02-12T19:22:23.915203042Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:22:23.923181 env[1376]: time="2024-02-12T19:22:23.915212559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:22:23.923181 env[1376]: time="2024-02-12T19:22:23.915279772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:22:23.923181 env[1376]: time="2024-02-12T19:22:23.915475095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:22:23.923181 env[1376]: time="2024-02-12T19:22:23.915596487Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:22:23.906422 systemd[1]: Finished extend-filesystems.service. Feb 12 19:22:23.923669 env[1376]: time="2024-02-12T19:22:23.915612441Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 12 19:22:23.923669 env[1376]: time="2024-02-12T19:22:23.915664181Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:22:23.923669 env[1376]: time="2024-02-12T19:22:23.915675376Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.936794020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.936842041Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.936856195Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.936889262Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.936903177Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.936973349Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.936988943Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.937313935Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.937331208Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.937344603Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.937358438Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.937370433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.937489666Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:22:23.939768 env[1376]: time="2024-02-12T19:22:23.937557959Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:22:23.939555 systemd[1]: Started containerd.service. Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.937809180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.937837729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.937850324Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938003823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938021337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938033292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938046447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938058522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938070437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938081153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938092589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938105783Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938235372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938251046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940231 env[1376]: time="2024-02-12T19:22:23.938263321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940580 env[1376]: time="2024-02-12T19:22:23.938274557Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:22:23.940580 env[1376]: time="2024-02-12T19:22:23.938288951Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:22:23.940580 env[1376]: time="2024-02-12T19:22:23.938300986Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:22:23.940580 env[1376]: time="2024-02-12T19:22:23.938317620Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:22:23.940580 env[1376]: time="2024-02-12T19:22:23.938351487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 19:22:23.940806 env[1376]: time="2024-02-12T19:22:23.938538613Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:22:23.940806 env[1376]: time="2024-02-12T19:22:23.938587754Z" level=info msg="Connect containerd service" Feb 12 19:22:23.940806 env[1376]: time="2024-02-12T19:22:23.938620781Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:22:23.940806 env[1376]: time="2024-02-12T19:22:23.939186758Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:22:23.940806 env[1376]: time="2024-02-12T19:22:23.939397795Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:22:23.940806 env[1376]: time="2024-02-12T19:22:23.939434260Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 12 19:22:23.940806 env[1376]: time="2024-02-12T19:22:23.939481482Z" level=info msg="containerd successfully booted in 0.102174s" Feb 12 19:22:23.959129 env[1376]: time="2024-02-12T19:22:23.941265739Z" level=info msg="Start subscribing containerd event" Feb 12 19:22:23.959129 env[1376]: time="2024-02-12T19:22:23.941311001Z" level=info msg="Start recovering state" Feb 12 19:22:23.959129 env[1376]: time="2024-02-12T19:22:23.941374096Z" level=info msg="Start event monitor" Feb 12 19:22:23.959129 env[1376]: time="2024-02-12T19:22:23.941393449Z" level=info msg="Start snapshots syncer" Feb 12 19:22:23.959129 env[1376]: time="2024-02-12T19:22:23.941402965Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:22:23.959129 env[1376]: time="2024-02-12T19:22:23.941410322Z" level=info msg="Start streaming server" Feb 12 19:22:23.961391 tar[1373]: ./bandwidth Feb 12 19:22:23.972612 bash[1409]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:22:23.973389 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:22:24.011375 systemd[1]: nvidia.service: Deactivated successfully. Feb 12 19:22:24.066580 dbus-daemon[1351]: [system] SELinux support is enabled Feb 12 19:22:24.066753 systemd[1]: Started dbus.service. Feb 12 19:22:24.073026 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:22:24.073057 systemd[1]: Reached target system-config.target. Feb 12 19:22:24.080820 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:22:24.080842 systemd[1]: Reached target user-config.target. 
Feb 12 19:22:24.087704 tar[1373]: ./ptp Feb 12 19:22:24.087813 dbus-daemon[1351]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 19:22:24.087990 systemd[1]: Started systemd-logind.service. Feb 12 19:22:24.158435 tar[1373]: ./vlan Feb 12 19:22:24.222349 tar[1373]: ./host-device Feb 12 19:22:24.278820 tar[1373]: ./tuning Feb 12 19:22:24.338440 tar[1373]: ./vrf Feb 12 19:22:24.391433 tar[1373]: ./sbr Feb 12 19:22:24.446921 tar[1373]: ./tap Feb 12 19:22:24.507900 tar[1373]: ./dhcp Feb 12 19:22:24.624174 tar[1373]: ./static Feb 12 19:22:24.657531 tar[1373]: ./firewall Feb 12 19:22:24.657969 systemd[1]: Finished prepare-critools.service. Feb 12 19:22:24.694704 tar[1373]: ./macvlan Feb 12 19:22:24.728606 tar[1373]: ./dummy Feb 12 19:22:24.761878 tar[1373]: ./bridge Feb 12 19:22:24.764185 update_engine[1367]: I0212 19:22:24.748070 1367 main.cc:92] Flatcar Update Engine starting Feb 12 19:22:24.799333 tar[1373]: ./ipvlan Feb 12 19:22:24.820627 systemd[1]: Started update-engine.service. Feb 12 19:22:24.820932 update_engine[1367]: I0212 19:22:24.820672 1367 update_check_scheduler.cc:74] Next update check in 2m45s Feb 12 19:22:24.829012 systemd[1]: Started locksmithd.service. Feb 12 19:22:24.833647 tar[1373]: ./portmap Feb 12 19:22:24.866862 tar[1373]: ./host-local Feb 12 19:22:24.960001 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:22:25.483001 sshd_keygen[1369]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:22:25.500138 systemd[1]: Finished sshd-keygen.service. Feb 12 19:22:25.506643 systemd[1]: Starting issuegen.service... Feb 12 19:22:25.512485 systemd[1]: Started waagent.service. Feb 12 19:22:25.517585 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:22:25.517810 systemd[1]: Finished issuegen.service. Feb 12 19:22:25.524794 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:22:25.537334 systemd[1]: Finished systemd-user-sessions.service. 
Feb 12 19:22:25.544235 systemd[1]: Started getty@tty1.service. Feb 12 19:22:25.550267 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 12 19:22:25.558854 systemd[1]: Reached target getty.target. Feb 12 19:22:25.563306 systemd[1]: Reached target multi-user.target. Feb 12 19:22:25.572733 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:22:25.581505 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:22:25.581683 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:22:25.587513 systemd[1]: Startup finished in 748ms (kernel) + 16.408s (initrd) + 26.125s (userspace) = 43.282s. Feb 12 19:22:26.370446 login[1478]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 12 19:22:26.371926 login[1479]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:22:26.415901 systemd[1]: Created slice user-500.slice. Feb 12 19:22:26.416966 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:22:26.419046 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:22:26.421502 systemd-logind[1365]: New session 1 of user core. Feb 12 19:22:26.462149 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:22:26.463659 systemd[1]: Starting user@500.service... Feb 12 19:22:26.492649 (systemd)[1482]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:22:26.698714 systemd[1482]: Queued start job for default target default.target. Feb 12 19:22:26.699596 systemd[1482]: Reached target paths.target. Feb 12 19:22:26.699625 systemd[1482]: Reached target sockets.target. Feb 12 19:22:26.699636 systemd[1482]: Reached target timers.target. Feb 12 19:22:26.699646 systemd[1482]: Reached target basic.target. Feb 12 19:22:26.699690 systemd[1482]: Reached target default.target. Feb 12 19:22:26.699713 systemd[1482]: Startup finished in 201ms. 
Feb 12 19:22:26.699771 systemd[1]: Started user@500.service. Feb 12 19:22:26.700664 systemd[1]: Started session-1.scope. Feb 12 19:22:27.370937 login[1478]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:22:27.374629 systemd-logind[1365]: New session 2 of user core. Feb 12 19:22:27.375072 systemd[1]: Started session-2.scope. Feb 12 19:22:33.204369 waagent[1476]: 2024-02-12T19:22:33.204255Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 12 19:22:33.210941 waagent[1476]: 2024-02-12T19:22:33.210866Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 12 19:22:33.215618 waagent[1476]: 2024-02-12T19:22:33.215559Z INFO Daemon Daemon Python: 3.9.16 Feb 12 19:22:33.221399 waagent[1476]: 2024-02-12T19:22:33.221237Z INFO Daemon Daemon Run daemon Feb 12 19:22:33.228739 waagent[1476]: 2024-02-12T19:22:33.226047Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 12 19:22:33.243846 waagent[1476]: 2024-02-12T19:22:33.243701Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 12 19:22:33.259707 waagent[1476]: 2024-02-12T19:22:33.259573Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:22:33.270816 waagent[1476]: 2024-02-12T19:22:33.270723Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:22:33.275924 waagent[1476]: 2024-02-12T19:22:33.275860Z INFO Daemon Daemon Using waagent for provisioning Feb 12 19:22:33.281887 waagent[1476]: 2024-02-12T19:22:33.281826Z INFO Daemon Daemon Activate resource disk Feb 12 19:22:33.286751 waagent[1476]: 2024-02-12T19:22:33.286686Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 12 19:22:33.301572 waagent[1476]: 2024-02-12T19:22:33.301501Z INFO Daemon Daemon Found device: None Feb 12 19:22:33.306289 waagent[1476]: 2024-02-12T19:22:33.306225Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 12 19:22:33.315041 waagent[1476]: 2024-02-12T19:22:33.314977Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 12 19:22:33.327240 waagent[1476]: 2024-02-12T19:22:33.327177Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:22:33.333400 waagent[1476]: 2024-02-12T19:22:33.333340Z INFO Daemon Daemon Running default provisioning handler Feb 12 19:22:33.347061 waagent[1476]: 2024-02-12T19:22:33.346928Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 12 19:22:33.362589 waagent[1476]: 2024-02-12T19:22:33.362455Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:22:33.372457 waagent[1476]: 2024-02-12T19:22:33.372386Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:22:33.378195 waagent[1476]: 2024-02-12T19:22:33.378127Z INFO Daemon Daemon Copying ovf-env.xml Feb 12 19:22:33.455130 waagent[1476]: 2024-02-12T19:22:33.454930Z INFO Daemon Daemon Successfully mounted dvd Feb 12 19:22:33.569579 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 12 19:22:33.593019 waagent[1476]: 2024-02-12T19:22:33.592877Z INFO Daemon Daemon Detect protocol endpoint Feb 12 19:22:33.598437 waagent[1476]: 2024-02-12T19:22:33.598360Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:22:33.604865 waagent[1476]: 2024-02-12T19:22:33.604798Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 12 19:22:33.612339 waagent[1476]: 2024-02-12T19:22:33.612274Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 12 19:22:33.618241 waagent[1476]: 2024-02-12T19:22:33.618176Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 12 19:22:33.623914 waagent[1476]: 2024-02-12T19:22:33.623835Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 12 19:22:33.758533 waagent[1476]: 2024-02-12T19:22:33.758416Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 12 19:22:33.766020 waagent[1476]: 2024-02-12T19:22:33.765977Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 12 19:22:33.772014 waagent[1476]: 2024-02-12T19:22:33.771951Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 12 19:22:34.302103 waagent[1476]: 2024-02-12T19:22:34.301951Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 12 19:22:34.317988 waagent[1476]: 2024-02-12T19:22:34.317909Z INFO Daemon Daemon Forcing an update of the goal state.. 
Feb 12 19:22:34.326304 waagent[1476]: 2024-02-12T19:22:34.326229Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 12 19:22:34.402171 waagent[1476]: 2024-02-12T19:22:34.402035Z INFO Daemon Daemon Found private key matching thumbprint 26075F299CDB09B116B76A2EEF49E07C953E01C1 Feb 12 19:22:34.411362 waagent[1476]: 2024-02-12T19:22:34.411281Z INFO Daemon Daemon Certificate with thumbprint 7BF85127363837C09E584FA3101DB31EACCDE09B has no matching private key. Feb 12 19:22:34.421902 waagent[1476]: 2024-02-12T19:22:34.421828Z INFO Daemon Daemon Fetch goal state completed Feb 12 19:22:34.448555 waagent[1476]: 2024-02-12T19:22:34.448497Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: cb00ebb3-2d74-4b47-a8cb-0225979c155a New eTag: 12394326093718117271] Feb 12 19:22:34.459794 waagent[1476]: 2024-02-12T19:22:34.459699Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:22:34.477368 waagent[1476]: 2024-02-12T19:22:34.477292Z INFO Daemon Daemon Starting provisioning Feb 12 19:22:34.482832 waagent[1476]: 2024-02-12T19:22:34.482760Z INFO Daemon Daemon Handle ovf-env.xml. Feb 12 19:22:34.487989 waagent[1476]: 2024-02-12T19:22:34.487923Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-c97c98db58] Feb 12 19:22:34.529288 waagent[1476]: 2024-02-12T19:22:34.529145Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-c97c98db58] Feb 12 19:22:34.536218 waagent[1476]: 2024-02-12T19:22:34.536135Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 12 19:22:34.543324 waagent[1476]: 2024-02-12T19:22:34.543253Z INFO Daemon Daemon Primary interface is [eth0] Feb 12 19:22:34.559832 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 12 19:22:34.559999 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 12 19:22:34.560057 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 12 19:22:34.560293 systemd[1]: Stopping systemd-networkd.service... 
Feb 12 19:22:34.566791 systemd-networkd[1246]: eth0: DHCPv6 lease lost Feb 12 19:22:34.568012 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:22:34.568178 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:22:34.570215 systemd[1]: Starting systemd-networkd.service... Feb 12 19:22:34.596662 systemd-networkd[1526]: enP51541s1: Link UP Feb 12 19:22:34.596673 systemd-networkd[1526]: enP51541s1: Gained carrier Feb 12 19:22:34.597641 systemd-networkd[1526]: eth0: Link UP Feb 12 19:22:34.597652 systemd-networkd[1526]: eth0: Gained carrier Feb 12 19:22:34.598135 systemd-networkd[1526]: lo: Link UP Feb 12 19:22:34.598144 systemd-networkd[1526]: lo: Gained carrier Feb 12 19:22:34.598369 systemd-networkd[1526]: eth0: Gained IPv6LL Feb 12 19:22:34.598572 systemd-networkd[1526]: Enumeration completed Feb 12 19:22:34.598676 systemd[1]: Started systemd-networkd.service. Feb 12 19:22:34.600378 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:22:34.607013 waagent[1476]: 2024-02-12T19:22:34.600817Z INFO Daemon Daemon Create user account if not exists Feb 12 19:22:34.607669 waagent[1476]: 2024-02-12T19:22:34.607578Z INFO Daemon Daemon User core already exists, skip useradd Feb 12 19:22:34.613906 systemd-networkd[1526]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:22:34.614774 waagent[1476]: 2024-02-12T19:22:34.614647Z INFO Daemon Daemon Configure sudoer Feb 12 19:22:34.620116 waagent[1476]: 2024-02-12T19:22:34.620040Z INFO Daemon Daemon Configure sshd Feb 12 19:22:34.624888 waagent[1476]: 2024-02-12T19:22:34.624818Z INFO Daemon Daemon Deploy ssh public key. Feb 12 19:22:34.639860 systemd-networkd[1526]: eth0: DHCPv4 address 10.200.20.25/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:22:34.641802 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 12 19:22:35.874219 waagent[1476]: 2024-02-12T19:22:35.874139Z INFO Daemon Daemon Provisioning complete Feb 12 19:22:35.894350 waagent[1476]: 2024-02-12T19:22:35.894284Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 12 19:22:35.903713 waagent[1476]: 2024-02-12T19:22:35.903615Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 12 19:22:35.916843 waagent[1476]: 2024-02-12T19:22:35.916755Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 12 19:22:36.221072 waagent[1535]: 2024-02-12T19:22:36.220978Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 12 19:22:36.221856 waagent[1535]: 2024-02-12T19:22:36.221796Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:36.221993 waagent[1535]: 2024-02-12T19:22:36.221947Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:36.234245 waagent[1535]: 2024-02-12T19:22:36.234168Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 12 19:22:36.234430 waagent[1535]: 2024-02-12T19:22:36.234382Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 12 19:22:36.303561 waagent[1535]: 2024-02-12T19:22:36.303422Z INFO ExtHandler ExtHandler Found private key matching thumbprint 26075F299CDB09B116B76A2EEF49E07C953E01C1 Feb 12 19:22:36.303806 waagent[1535]: 2024-02-12T19:22:36.303715Z INFO ExtHandler ExtHandler Certificate with thumbprint 7BF85127363837C09E584FA3101DB31EACCDE09B has no matching private key. 
Feb 12 19:22:36.304052 waagent[1535]: 2024-02-12T19:22:36.304001Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 12 19:22:36.318221 waagent[1535]: 2024-02-12T19:22:36.318164Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 4f0a4c62-09bf-4c4a-b527-40276f999e7b New eTag: 12394326093718117271] Feb 12 19:22:36.318864 waagent[1535]: 2024-02-12T19:22:36.318805Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:22:36.363580 waagent[1535]: 2024-02-12T19:22:36.363442Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:22:36.396397 waagent[1535]: 2024-02-12T19:22:36.396313Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1535 Feb 12 19:22:36.400177 waagent[1535]: 2024-02-12T19:22:36.400110Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:22:36.401521 waagent[1535]: 2024-02-12T19:22:36.401456Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:22:36.571643 waagent[1535]: 2024-02-12T19:22:36.571531Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:22:36.572276 waagent[1535]: 2024-02-12T19:22:36.572215Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:22:36.579974 waagent[1535]: 2024-02-12T19:22:36.579919Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 12 19:22:36.580606 waagent[1535]: 2024-02-12T19:22:36.580552Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:22:36.581881 waagent[1535]: 2024-02-12T19:22:36.581820Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 12 19:22:36.583329 waagent[1535]: 2024-02-12T19:22:36.583258Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:22:36.583597 waagent[1535]: 2024-02-12T19:22:36.583528Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:36.584154 waagent[1535]: 2024-02-12T19:22:36.584083Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:36.584730 waagent[1535]: 2024-02-12T19:22:36.584665Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 12 19:22:36.585076 waagent[1535]: 2024-02-12T19:22:36.585014Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:22:36.585076 waagent[1535]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:22:36.585076 waagent[1535]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:22:36.585076 waagent[1535]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:22:36.585076 waagent[1535]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:36.585076 waagent[1535]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:36.585076 waagent[1535]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:36.587397 waagent[1535]: 2024-02-12T19:22:36.587271Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:36.587641 waagent[1535]: 2024-02-12T19:22:36.587567Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 12 19:22:36.588357 waagent[1535]: 2024-02-12T19:22:36.588281Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:22:36.588543 waagent[1535]: 2024-02-12T19:22:36.588473Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:22:36.589269 waagent[1535]: 2024-02-12T19:22:36.589178Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:22:36.589720 waagent[1535]: 2024-02-12T19:22:36.589656Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:22:36.589922 waagent[1535]: 2024-02-12T19:22:36.589859Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 12 19:22:36.590612 waagent[1535]: 2024-02-12T19:22:36.590551Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:36.591368 waagent[1535]: 2024-02-12T19:22:36.591298Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:22:36.594799 waagent[1535]: 2024-02-12T19:22:36.594713Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:22:36.595835 waagent[1535]: 2024-02-12T19:22:36.595775Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:22:36.605159 waagent[1535]: 2024-02-12T19:22:36.605100Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 12 19:22:36.605901 waagent[1535]: 2024-02-12T19:22:36.605852Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:22:36.606916 waagent[1535]: 2024-02-12T19:22:36.606862Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 12 19:22:36.631077 waagent[1535]: 2024-02-12T19:22:36.630995Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1526' Feb 12 19:22:36.636654 waagent[1535]: 2024-02-12T19:22:36.636585Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Feb 12 19:22:36.722818 waagent[1535]: 2024-02-12T19:22:36.722667Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:22:36.722818 waagent[1535]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:22:36.722818 waagent[1535]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:22:36.722818 waagent[1535]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:7e:be brd ff:ff:ff:ff:ff:ff Feb 12 19:22:36.722818 waagent[1535]: 3: enP51541s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:7e:be brd ff:ff:ff:ff:ff:ff\ altname enP51541p0s2 Feb 12 19:22:36.722818 waagent[1535]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:22:36.722818 waagent[1535]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:22:36.722818 waagent[1535]: 2: eth0 inet 10.200.20.25/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:22:36.722818 waagent[1535]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:22:36.722818 waagent[1535]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:22:36.722818 waagent[1535]: 2: eth0 inet6 fe80::222:48ff:feb6:7ebe/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:22:36.752816 waagent[1535]: 2024-02-12T19:22:36.752720Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 12 19:22:36.920580 waagent[1476]: 2024-02-12T19:22:36.920371Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 12 19:22:36.924652 waagent[1476]: 2024-02-12T19:22:36.924594Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 12 19:22:38.090615 
waagent[1564]: 2024-02-12T19:22:38.090509Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 12 19:22:38.091341 waagent[1564]: 2024-02-12T19:22:38.091273Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 12 19:22:38.091470 waagent[1564]: 2024-02-12T19:22:38.091423Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 12 19:22:38.100105 waagent[1564]: 2024-02-12T19:22:38.099975Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:22:38.100515 waagent[1564]: 2024-02-12T19:22:38.100458Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:38.100658 waagent[1564]: 2024-02-12T19:22:38.100611Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:38.113649 waagent[1564]: 2024-02-12T19:22:38.113571Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 12 19:22:38.122517 waagent[1564]: 2024-02-12T19:22:38.122458Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 12 19:22:38.123575 waagent[1564]: 2024-02-12T19:22:38.123516Z INFO ExtHandler Feb 12 19:22:38.123725 waagent[1564]: 2024-02-12T19:22:38.123676Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b9e1140b-4516-43ea-8031-12944a4aa107 eTag: 12394326093718117271 source: Fabric] Feb 12 19:22:38.124462 waagent[1564]: 2024-02-12T19:22:38.124403Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 12 19:22:38.125647 waagent[1564]: 2024-02-12T19:22:38.125586Z INFO ExtHandler Feb 12 19:22:38.125791 waagent[1564]: 2024-02-12T19:22:38.125732Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 12 19:22:38.138146 waagent[1564]: 2024-02-12T19:22:38.138096Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 12 19:22:38.138631 waagent[1564]: 2024-02-12T19:22:38.138581Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:22:38.159439 waagent[1564]: 2024-02-12T19:22:38.159380Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 12 19:22:38.230053 waagent[1564]: 2024-02-12T19:22:38.229882Z INFO ExtHandler Downloaded certificate {'thumbprint': '7BF85127363837C09E584FA3101DB31EACCDE09B', 'hasPrivateKey': False} Feb 12 19:22:38.231332 waagent[1564]: 2024-02-12T19:22:38.231259Z INFO ExtHandler Downloaded certificate {'thumbprint': '26075F299CDB09B116B76A2EEF49E07C953E01C1', 'hasPrivateKey': True} Feb 12 19:22:38.232472 waagent[1564]: 2024-02-12T19:22:38.232407Z INFO ExtHandler Fetch goal state completed Feb 12 19:22:38.256907 waagent[1564]: 2024-02-12T19:22:38.256837Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1564 Feb 12 19:22:38.260390 waagent[1564]: 2024-02-12T19:22:38.260326Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:22:38.261850 waagent[1564]: 2024-02-12T19:22:38.261794Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:22:38.267191 waagent[1564]: 2024-02-12T19:22:38.267125Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:22:38.267588 waagent[1564]: 2024-02-12T19:22:38.267529Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 
19:22:38.275299 waagent[1564]: 2024-02-12T19:22:38.275229Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 12 19:22:38.275810 waagent[1564]: 2024-02-12T19:22:38.275730Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:22:38.282094 waagent[1564]: 2024-02-12T19:22:38.281983Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 12 19:22:38.285772 waagent[1564]: 2024-02-12T19:22:38.285691Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 12 19:22:38.287294 waagent[1564]: 2024-02-12T19:22:38.287220Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:22:38.287974 waagent[1564]: 2024-02-12T19:22:38.287911Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:38.288227 waagent[1564]: 2024-02-12T19:22:38.288178Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:38.288919 waagent[1564]: 2024-02-12T19:22:38.288851Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:22:38.289509 waagent[1564]: 2024-02-12T19:22:38.289434Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 12 19:22:38.290133 waagent[1564]: 2024-02-12T19:22:38.289959Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:22:38.290215 waagent[1564]: 2024-02-12T19:22:38.290130Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Feb 12 19:22:38.290444 waagent[1564]: 2024-02-12T19:22:38.290378Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:22:38.290444 waagent[1564]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:22:38.290444 waagent[1564]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:22:38.290444 waagent[1564]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:22:38.290444 waagent[1564]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:38.290444 waagent[1564]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:38.290444 waagent[1564]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:38.290605 waagent[1564]: 2024-02-12T19:22:38.290508Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:38.290713 waagent[1564]: 2024-02-12T19:22:38.290656Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:38.291542 waagent[1564]: 2024-02-12T19:22:38.291461Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:22:38.293832 waagent[1564]: 2024-02-12T19:22:38.293637Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:22:38.294113 waagent[1564]: 2024-02-12T19:22:38.294052Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:22:38.295409 waagent[1564]: 2024-02-12T19:22:38.295342Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:22:38.295627 waagent[1564]: 2024-02-12T19:22:38.295576Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:22:38.297843 waagent[1564]: 2024-02-12T19:22:38.297761Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 12 19:22:38.326986 waagent[1564]: 2024-02-12T19:22:38.326902Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 12 19:22:38.328253 waagent[1564]: 2024-02-12T19:22:38.328177Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:22:38.328253 waagent[1564]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:22:38.328253 waagent[1564]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:22:38.328253 waagent[1564]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:7e:be brd ff:ff:ff:ff:ff:ff Feb 12 19:22:38.328253 waagent[1564]: 3: enP51541s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:7e:be brd ff:ff:ff:ff:ff:ff\ altname enP51541p0s2 Feb 12 19:22:38.328253 waagent[1564]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:22:38.328253 waagent[1564]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:22:38.328253 waagent[1564]: 2: eth0 inet 10.200.20.25/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:22:38.328253 waagent[1564]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:22:38.328253 waagent[1564]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:22:38.328253 waagent[1564]: 2: eth0 inet6 fe80::222:48ff:feb6:7ebe/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:22:38.330615 waagent[1564]: 2024-02-12T19:22:38.330548Z INFO ExtHandler ExtHandler Downloading manifest Feb 12 19:22:38.352802 waagent[1564]: 2024-02-12T19:22:38.352688Z INFO ExtHandler ExtHandler Feb 12 19:22:38.353109 waagent[1564]: 2024-02-12T19:22:38.353052Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 
652b5c8b-796e-42fd-aeb2-a5daec5860e5 correlation 6e468b93-2568-44c4-8a32-16a709d51116 created: 2024-02-12T19:20:52.352882Z] Feb 12 19:22:38.354095 waagent[1564]: 2024-02-12T19:22:38.354039Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 12 19:22:38.356141 waagent[1564]: 2024-02-12T19:22:38.356087Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 12 19:22:38.378988 waagent[1564]: 2024-02-12T19:22:38.378918Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 12 19:22:38.393242 waagent[1564]: 2024-02-12T19:22:38.393168Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0FD1C237-E5B9-44F5-9AF8-4B2A79A1ED55;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 12 19:22:38.621024 waagent[1564]: 2024-02-12T19:22:38.620852Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 12 19:22:38.621024 waagent[1564]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:38.621024 waagent[1564]: pkts bytes target prot opt in out source destination Feb 12 19:22:38.621024 waagent[1564]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:38.621024 waagent[1564]: pkts bytes target prot opt in out source destination Feb 12 19:22:38.621024 waagent[1564]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:38.621024 waagent[1564]: pkts bytes target prot opt in out source destination Feb 12 19:22:38.621024 waagent[1564]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:22:38.621024 waagent[1564]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:22:38.621024 waagent[1564]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:22:38.628562 waagent[1564]: 2024-02-12T19:22:38.628457Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 12 19:22:38.628562 
waagent[1564]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:38.628562 waagent[1564]: pkts bytes target prot opt in out source destination Feb 12 19:22:38.628562 waagent[1564]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:38.628562 waagent[1564]: pkts bytes target prot opt in out source destination Feb 12 19:22:38.628562 waagent[1564]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:38.628562 waagent[1564]: pkts bytes target prot opt in out source destination Feb 12 19:22:38.628562 waagent[1564]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:22:38.628562 waagent[1564]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:22:38.628562 waagent[1564]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:22:38.629405 waagent[1564]: 2024-02-12T19:22:38.629358Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 12 19:22:59.033716 systemd[1]: Created slice system-sshd.slice. Feb 12 19:22:59.034811 systemd[1]: Started sshd@0-10.200.20.25:22-10.200.12.6:48952.service. Feb 12 19:22:59.758198 sshd[1614]: Accepted publickey for core from 10.200.12.6 port 48952 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:22:59.785901 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:22:59.790172 systemd[1]: Started session-3.scope. Feb 12 19:22:59.790466 systemd-logind[1365]: New session 3 of user core. Feb 12 19:23:00.123469 systemd[1]: Started sshd@1-10.200.20.25:22-10.200.12.6:48958.service. Feb 12 19:23:00.546638 sshd[1619]: Accepted publickey for core from 10.200.12.6 port 48958 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:00.547947 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:00.551716 systemd-logind[1365]: New session 4 of user core. Feb 12 19:23:00.552186 systemd[1]: Started session-4.scope. 
Feb 12 19:23:00.851617 sshd[1619]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:00.854036 systemd[1]: sshd@1-10.200.20.25:22-10.200.12.6:48958.service: Deactivated successfully. Feb 12 19:23:00.854705 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:23:00.855246 systemd-logind[1365]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:23:00.856173 systemd-logind[1365]: Removed session 4. Feb 12 19:23:00.905805 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 12 19:23:00.924401 systemd[1]: Started sshd@2-10.200.20.25:22-10.200.12.6:48964.service. Feb 12 19:23:01.347663 sshd[1625]: Accepted publickey for core from 10.200.12.6 port 48964 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:01.348902 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:01.352640 systemd-logind[1365]: New session 5 of user core. Feb 12 19:23:01.353098 systemd[1]: Started session-5.scope. Feb 12 19:23:01.649337 sshd[1625]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:01.651670 systemd[1]: sshd@2-10.200.20.25:22-10.200.12.6:48964.service: Deactivated successfully. Feb 12 19:23:01.652371 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:23:01.652872 systemd-logind[1365]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:23:01.653624 systemd-logind[1365]: Removed session 5. Feb 12 19:23:01.718112 systemd[1]: Started sshd@3-10.200.20.25:22-10.200.12.6:48976.service. Feb 12 19:23:02.133477 sshd[1631]: Accepted publickey for core from 10.200.12.6 port 48976 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:02.134683 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:02.138838 systemd[1]: Started session-6.scope. Feb 12 19:23:02.139129 systemd-logind[1365]: New session 6 of user core. 
Feb 12 19:23:02.434785 sshd[1631]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:02.437550 systemd[1]: sshd@3-10.200.20.25:22-10.200.12.6:48976.service: Deactivated successfully. Feb 12 19:23:02.437860 systemd-logind[1365]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:23:02.438183 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:23:02.438965 systemd-logind[1365]: Removed session 6. Feb 12 19:23:02.513194 systemd[1]: Started sshd@4-10.200.20.25:22-10.200.12.6:48982.service. Feb 12 19:23:02.962111 sshd[1637]: Accepted publickey for core from 10.200.12.6 port 48982 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:02.963367 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:02.966786 systemd-logind[1365]: New session 7 of user core. Feb 12 19:23:02.967417 systemd[1]: Started session-7.scope. Feb 12 19:23:03.708761 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:23:03.708965 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:23:04.686666 systemd[1]: Reloading. Feb 12 19:23:04.745450 /usr/lib/systemd/system-generators/torcx-generator[1670]: time="2024-02-12T19:23:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:23:04.745809 /usr/lib/systemd/system-generators/torcx-generator[1670]: time="2024-02-12T19:23:04Z" level=info msg="torcx already run" Feb 12 19:23:04.831561 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 12 19:23:04.831581 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:23:04.848493 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:23:04.926107 systemd[1]: Started kubelet.service. Feb 12 19:23:04.938014 systemd[1]: Starting coreos-metadata.service... Feb 12 19:23:04.974904 coreos-metadata[1736]: Feb 12 19:23:04.974 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 12 19:23:04.977345 coreos-metadata[1736]: Feb 12 19:23:04.977 INFO Fetch successful Feb 12 19:23:04.977475 coreos-metadata[1736]: Feb 12 19:23:04.977 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 12 19:23:04.978928 coreos-metadata[1736]: Feb 12 19:23:04.978 INFO Fetch successful Feb 12 19:23:04.979267 coreos-metadata[1736]: Feb 12 19:23:04.979 INFO Fetching http://168.63.129.16/machine/b58df8a1-a495-49ff-8cbb-92ce148a16ec/d27683be%2D3640%2D41a9%2Db946%2D82c5d1a274cb.%5Fci%2D3510.3.2%2Da%2Dc97c98db58?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 12 19:23:04.980724 coreos-metadata[1736]: Feb 12 19:23:04.980 INFO Fetch successful Feb 12 19:23:04.999443 kubelet[1729]: E0212 19:23:04.999386 1729 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 19:23:05.001805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:23:05.001937 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 12 19:23:05.019648 coreos-metadata[1736]: Feb 12 19:23:05.019 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 12 19:23:05.034266 coreos-metadata[1736]: Feb 12 19:23:05.034 INFO Fetch successful Feb 12 19:23:05.042476 systemd[1]: Finished coreos-metadata.service. Feb 12 19:23:08.874380 systemd[1]: Stopped kubelet.service. Feb 12 19:23:08.888147 systemd[1]: Reloading. Feb 12 19:23:08.981066 /usr/lib/systemd/system-generators/torcx-generator[1792]: time="2024-02-12T19:23:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:23:08.981096 /usr/lib/systemd/system-generators/torcx-generator[1792]: time="2024-02-12T19:23:08Z" level=info msg="torcx already run" Feb 12 19:23:09.047771 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:23:09.047789 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:23:09.064890 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:23:09.166365 systemd[1]: Started kubelet.service. Feb 12 19:23:09.220871 kubelet[1853]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 19:23:09.220871 kubelet[1853]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 19:23:09.220871 kubelet[1853]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:23:09.221258 kubelet[1853]: I0212 19:23:09.220912 1853 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:23:10.113474 kubelet[1853]: I0212 19:23:10.113445 1853 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 12 19:23:10.113676 kubelet[1853]: I0212 19:23:10.113665 1853 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:23:10.113957 kubelet[1853]: I0212 19:23:10.113941 1853 server.go:837] "Client rotation is on, will bootstrap in background" Feb 12 19:23:10.116847 kubelet[1853]: I0212 19:23:10.116824 1853 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:23:10.118948 kubelet[1853]: W0212 19:23:10.118924 1853 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:23:10.119753 kubelet[1853]: I0212 19:23:10.119715 1853 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:23:10.119972 kubelet[1853]: I0212 19:23:10.119952 1853 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:23:10.120042 kubelet[1853]: I0212 19:23:10.120028 1853 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:23:10.120127 kubelet[1853]: I0212 19:23:10.120047 1853 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:23:10.120127 kubelet[1853]: I0212 19:23:10.120059 1853 container_manager_linux.go:302] "Creating device plugin manager" Feb 12 19:23:10.120184 kubelet[1853]: I0212 19:23:10.120150 1853 state_mem.go:36] "Initialized new in-memory state store" Feb 12 
19:23:10.127217 kubelet[1853]: I0212 19:23:10.127196 1853 kubelet.go:405] "Attempting to sync node with API server" Feb 12 19:23:10.127367 kubelet[1853]: I0212 19:23:10.127357 1853 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:23:10.127444 kubelet[1853]: I0212 19:23:10.127435 1853 kubelet.go:309] "Adding apiserver pod source" Feb 12 19:23:10.127499 kubelet[1853]: I0212 19:23:10.127491 1853 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:23:10.128037 kubelet[1853]: E0212 19:23:10.128020 1853 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:10.128135 kubelet[1853]: E0212 19:23:10.128125 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:10.128851 kubelet[1853]: I0212 19:23:10.128823 1853 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:23:10.129278 kubelet[1853]: W0212 19:23:10.129241 1853 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 12 19:23:10.129766 kubelet[1853]: I0212 19:23:10.129752 1853 server.go:1168] "Started kubelet" Feb 12 19:23:10.130372 kubelet[1853]: I0212 19:23:10.130351 1853 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:23:10.130630 kubelet[1853]: I0212 19:23:10.130611 1853 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:23:10.130997 kubelet[1853]: I0212 19:23:10.130962 1853 server.go:461] "Adding debug handlers to kubelet server" Feb 12 19:23:10.132963 kubelet[1853]: E0212 19:23:10.132946 1853 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:23:10.133054 kubelet[1853]: E0212 19:23:10.133044 1853 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:23:10.139899 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 12 19:23:10.140071 kubelet[1853]: I0212 19:23:10.140046 1853 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:23:10.143815 kubelet[1853]: I0212 19:23:10.143791 1853 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 19:23:10.145655 kubelet[1853]: I0212 19:23:10.145626 1853 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 19:23:10.160201 kubelet[1853]: W0212 19:23:10.160184 1853 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:10.160326 kubelet[1853]: E0212 19:23:10.160316 1853 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:10.160540 kubelet[1853]: E0212 19:23:10.160460 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f0203efc09", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 
10, 129716233, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 129716233, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:10.160839 kubelet[1853]: W0212 19:23:10.160823 1853 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.20.25" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:10.160946 kubelet[1853]: E0212 19:23:10.160935 1853 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.25" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:10.161159 kubelet[1853]: W0212 19:23:10.161131 1853 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:23:10.161159 kubelet[1853]: E0212 19:23:10.161152 1853 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:23:10.161385 kubelet[1853]: E0212 19:23:10.161367 1853 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.25\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 12 
19:23:10.161822 kubelet[1853]: E0212 19:23:10.161767 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f020718f9a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 133030810, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 133030810, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:10.165393 kubelet[1853]: I0212 19:23:10.165371 1853 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:23:10.165524 kubelet[1853]: I0212 19:23:10.165512 1853 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:23:10.165609 kubelet[1853]: I0212 19:23:10.165599 1853 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:23:10.165883 kubelet[1853]: E0212 19:23:10.165598 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022523065", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.25 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164529253, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164529253, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:10.166579 kubelet[1853]: E0212 19:23:10.166524 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022527a9d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.25 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164548253, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164548253, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:10.167309 kubelet[1853]: E0212 19:23:10.167255 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022528795", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.25 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164551573, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164551573, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:10.171397 kubelet[1853]: I0212 19:23:10.171368 1853 policy_none.go:49] "None policy: Start" Feb 12 19:23:10.172109 kubelet[1853]: I0212 19:23:10.172093 1853 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:23:10.172218 kubelet[1853]: I0212 19:23:10.172208 1853 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:23:10.186933 systemd[1]: Created slice kubepods.slice. Feb 12 19:23:10.190919 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 12 19:23:10.193583 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 19:23:10.202437 kubelet[1853]: I0212 19:23:10.202409 1853 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:23:10.202786 kubelet[1853]: I0212 19:23:10.202664 1853 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:23:10.203695 kubelet[1853]: E0212 19:23:10.203664 1853 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.25\" not found" Feb 12 19:23:10.205931 kubelet[1853]: E0212 19:23:10.205847 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f024bb2667", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 204962407, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 204962407, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" 
in the namespace "default"' (will not retry!) Feb 12 19:23:10.244931 kubelet[1853]: I0212 19:23:10.244895 1853 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.25" Feb 12 19:23:10.248583 kubelet[1853]: E0212 19:23:10.248555 1853 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.25" Feb 12 19:23:10.248682 kubelet[1853]: E0212 19:23:10.248614 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022523065", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.25 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164529253, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 244855890, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.25.17b333f022523065" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:10.249942 kubelet[1853]: E0212 19:23:10.249881 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022527a9d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.25 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164548253, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 244864490, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.25.17b333f022527a9d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:10.251852 kubelet[1853]: E0212 19:23:10.251788 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022528795", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.25 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164551573, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 244869650, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.25.17b333f022528795" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:10.278446 kubelet[1853]: I0212 19:23:10.277766 1853 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:23:10.278585 kubelet[1853]: I0212 19:23:10.278547 1853 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:23:10.278585 kubelet[1853]: I0212 19:23:10.278582 1853 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 19:23:10.278647 kubelet[1853]: I0212 19:23:10.278605 1853 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 19:23:10.278647 kubelet[1853]: E0212 19:23:10.278645 1853 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:23:10.285755 kubelet[1853]: W0212 19:23:10.285713 1853 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:10.285932 kubelet[1853]: E0212 19:23:10.285918 1853 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:10.305748 update_engine[1367]: I0212 19:23:10.305344 1367 update_attempter.cc:509] Updating boot flags... 
Feb 12 19:23:10.362935 kubelet[1853]: E0212 19:23:10.362903 1853 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.25\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 12 19:23:10.453357 kubelet[1853]: I0212 19:23:10.452913 1853 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.25" Feb 12 19:23:10.454626 kubelet[1853]: E0212 19:23:10.454353 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022523065", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.25 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164529253, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 452702150, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.25.17b333f022523065" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:10.454626 kubelet[1853]: E0212 19:23:10.454609 1853 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.25" Feb 12 19:23:10.455353 kubelet[1853]: E0212 19:23:10.455108 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022527a9d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.25 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164548253, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 452707870, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.25.17b333f022527a9d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:10.455895 kubelet[1853]: E0212 19:23:10.455831 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022528795", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.25 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164551573, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 452710670, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.25.17b333f022528795" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:10.765220 kubelet[1853]: E0212 19:23:10.764768 1853 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.25\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 12 19:23:10.856010 kubelet[1853]: I0212 19:23:10.855797 1853 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.25" Feb 12 19:23:10.856774 kubelet[1853]: E0212 19:23:10.856729 1853 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.25" Feb 12 19:23:10.857214 kubelet[1853]: E0212 19:23:10.856941 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022523065", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.25 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164529253, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 855755750, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.25.17b333f022523065" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:10.857867 kubelet[1853]: E0212 19:23:10.857625 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022527a9d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.25 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164548253, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 855766710, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.25.17b333f022527a9d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:10.858328 kubelet[1853]: E0212 19:23:10.858265 1853 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.25.17b333f022528795", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.25", UID:"10.200.20.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.25 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.25"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 164551573, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 855769310, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.25.17b333f022528795" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:11.116726 kubelet[1853]: I0212 19:23:11.116154 1853 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 19:23:11.128437 kubelet[1853]: E0212 19:23:11.128402 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:11.519384 kubelet[1853]: E0212 19:23:11.519358 1853 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.20.25" not found Feb 12 19:23:11.568444 kubelet[1853]: E0212 19:23:11.568394 1853 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.20.25\" not found" node="10.200.20.25" Feb 12 19:23:11.658242 kubelet[1853]: I0212 19:23:11.658214 1853 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.25" Feb 12 19:23:11.662103 kubelet[1853]: I0212 19:23:11.662083 1853 kubelet_node_status.go:73] "Successfully registered node" node="10.200.20.25" Feb 12 19:23:11.673369 kubelet[1853]: I0212 19:23:11.673332 1853 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 19:23:11.673666 env[1376]: time="2024-02-12T19:23:11.673613906Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:23:11.674094 kubelet[1853]: I0212 19:23:11.674080 1853 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 19:23:11.695555 sudo[1640]: pam_unix(sudo:session): session closed for user root Feb 12 19:23:11.794383 sshd[1637]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:11.797275 systemd[1]: sshd@4-10.200.20.25:22-10.200.12.6:48982.service: Deactivated successfully. Feb 12 19:23:11.797997 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 12 19:23:11.798573 systemd-logind[1365]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:23:11.799399 systemd-logind[1365]: Removed session 7. Feb 12 19:23:12.129089 kubelet[1853]: I0212 19:23:12.128690 1853 apiserver.go:52] "Watching apiserver" Feb 12 19:23:12.129304 kubelet[1853]: E0212 19:23:12.128785 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:12.131487 kubelet[1853]: I0212 19:23:12.131463 1853 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:23:12.131560 kubelet[1853]: I0212 19:23:12.131556 1853 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:23:12.136703 systemd[1]: Created slice kubepods-burstable-pod43e0018c_55d3_49f9_991d_b25ef48b639f.slice. Feb 12 19:23:12.146318 kubelet[1853]: I0212 19:23:12.146288 1853 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 12 19:23:12.151323 systemd[1]: Created slice kubepods-besteffort-pod80769fa4_ab6f_4790_b893_243386d8c810.slice. 
Feb 12 19:23:12.157180 kubelet[1853]: I0212 19:23:12.157153 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-cgroup\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157265 kubelet[1853]: I0212 19:23:12.157192 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-lib-modules\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157265 kubelet[1853]: I0212 19:23:12.157216 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-host-proc-sys-net\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157265 kubelet[1853]: I0212 19:23:12.157235 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-host-proc-sys-kernel\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157265 kubelet[1853]: I0212 19:23:12.157254 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80769fa4-ab6f-4790-b893-243386d8c810-xtables-lock\") pod \"kube-proxy-6n4qk\" (UID: \"80769fa4-ab6f-4790-b893-243386d8c810\") " pod="kube-system/kube-proxy-6n4qk" Feb 12 19:23:12.157382 kubelet[1853]: I0212 19:23:12.157272 1853 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80769fa4-ab6f-4790-b893-243386d8c810-lib-modules\") pod \"kube-proxy-6n4qk\" (UID: \"80769fa4-ab6f-4790-b893-243386d8c810\") " pod="kube-system/kube-proxy-6n4qk" Feb 12 19:23:12.157382 kubelet[1853]: I0212 19:23:12.157290 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-bpf-maps\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157382 kubelet[1853]: I0212 19:23:12.157308 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-config-path\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157382 kubelet[1853]: I0212 19:23:12.157329 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw649\" (UniqueName: \"kubernetes.io/projected/43e0018c-55d3-49f9-991d-b25ef48b639f-kube-api-access-jw649\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157382 kubelet[1853]: I0212 19:23:12.157348 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/80769fa4-ab6f-4790-b893-243386d8c810-kube-proxy\") pod \"kube-proxy-6n4qk\" (UID: \"80769fa4-ab6f-4790-b893-243386d8c810\") " pod="kube-system/kube-proxy-6n4qk" Feb 12 19:23:12.157491 kubelet[1853]: I0212 19:23:12.157370 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bz9c\" (UniqueName: 
\"kubernetes.io/projected/80769fa4-ab6f-4790-b893-243386d8c810-kube-api-access-7bz9c\") pod \"kube-proxy-6n4qk\" (UID: \"80769fa4-ab6f-4790-b893-243386d8c810\") " pod="kube-system/kube-proxy-6n4qk" Feb 12 19:23:12.157491 kubelet[1853]: I0212 19:23:12.157393 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43e0018c-55d3-49f9-991d-b25ef48b639f-clustermesh-secrets\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157491 kubelet[1853]: I0212 19:23:12.157411 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43e0018c-55d3-49f9-991d-b25ef48b639f-hubble-tls\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157491 kubelet[1853]: I0212 19:23:12.157434 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-run\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157491 kubelet[1853]: I0212 19:23:12.157457 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-hostproc\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157491 kubelet[1853]: I0212 19:23:12.157474 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cni-path\") pod \"cilium-rprmz\" (UID: 
\"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157613 kubelet[1853]: I0212 19:23:12.157493 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-etc-cni-netd\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157613 kubelet[1853]: I0212 19:23:12.157510 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-xtables-lock\") pod \"cilium-rprmz\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " pod="kube-system/cilium-rprmz" Feb 12 19:23:12.157613 kubelet[1853]: I0212 19:23:12.157518 1853 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:23:12.450573 env[1376]: time="2024-02-12T19:23:12.450523713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rprmz,Uid:43e0018c-55d3-49f9-991d-b25ef48b639f,Namespace:kube-system,Attempt:0,}" Feb 12 19:23:12.461216 env[1376]: time="2024-02-12T19:23:12.461173336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6n4qk,Uid:80769fa4-ab6f-4790-b893-243386d8c810,Namespace:kube-system,Attempt:0,}" Feb 12 19:23:13.130139 kubelet[1853]: E0212 19:23:13.130102 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:13.981187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount133352340.mount: Deactivated successfully. 
Feb 12 19:23:14.002561 env[1376]: time="2024-02-12T19:23:14.002517900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:14.008028 env[1376]: time="2024-02-12T19:23:14.007996780Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:14.013280 env[1376]: time="2024-02-12T19:23:14.013250183Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:14.019827 env[1376]: time="2024-02-12T19:23:14.019787687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:14.022681 env[1376]: time="2024-02-12T19:23:14.022642605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:14.025622 env[1376]: time="2024-02-12T19:23:14.025597162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:14.028969 env[1376]: time="2024-02-12T19:23:14.028942633Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:14.031063 env[1376]: time="2024-02-12T19:23:14.031027163Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:14.096437 env[1376]: time="2024-02-12T19:23:14.093229892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:14.096437 env[1376]: time="2024-02-12T19:23:14.093264451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:14.096437 env[1376]: time="2024-02-12T19:23:14.093274091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:14.096437 env[1376]: time="2024-02-12T19:23:14.093368650Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1cc3035892db9e866593a48592ee631079e59975fe7f172f268ca1d84107835 pid=1934 runtime=io.containerd.runc.v2 Feb 12 19:23:14.097334 env[1376]: time="2024-02-12T19:23:14.096510164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:14.097440 env[1376]: time="2024-02-12T19:23:14.096560603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:14.097440 env[1376]: time="2024-02-12T19:23:14.096580323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:14.097440 env[1376]: time="2024-02-12T19:23:14.096778800Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212 pid=1953 runtime=io.containerd.runc.v2 Feb 12 19:23:14.112817 systemd[1]: Started cri-containerd-d1cc3035892db9e866593a48592ee631079e59975fe7f172f268ca1d84107835.scope. Feb 12 19:23:14.118276 systemd[1]: Started cri-containerd-a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212.scope. Feb 12 19:23:14.131015 kubelet[1853]: E0212 19:23:14.130970 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:14.154363 env[1376]: time="2024-02-12T19:23:14.154301677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rprmz,Uid:43e0018c-55d3-49f9-991d-b25ef48b639f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\"" Feb 12 19:23:14.156889 env[1376]: time="2024-02-12T19:23:14.156852640Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:23:14.159904 env[1376]: time="2024-02-12T19:23:14.159860116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6n4qk,Uid:80769fa4-ab6f-4790-b893-243386d8c810,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1cc3035892db9e866593a48592ee631079e59975fe7f172f268ca1d84107835\"" Feb 12 19:23:15.131612 kubelet[1853]: E0212 19:23:15.131583 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:16.131877 kubelet[1853]: E0212 19:23:16.131841 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:17.132731 
kubelet[1853]: E0212 19:23:17.132692 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:18.133555 kubelet[1853]: E0212 19:23:18.133507 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:18.786170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744020087.mount: Deactivated successfully. Feb 12 19:23:19.134215 kubelet[1853]: E0212 19:23:19.133976 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:20.135112 kubelet[1853]: E0212 19:23:20.135071 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:20.861561 env[1376]: time="2024-02-12T19:23:20.861508784Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:20.867146 env[1376]: time="2024-02-12T19:23:20.867104089Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:20.870810 env[1376]: time="2024-02-12T19:23:20.870772492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:20.871433 env[1376]: time="2024-02-12T19:23:20.871404326Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 12 19:23:20.872613 env[1376]: time="2024-02-12T19:23:20.872581074Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 12 19:23:20.873667 env[1376]: time="2024-02-12T19:23:20.873623184Z" level=info msg="CreateContainer within sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:23:20.897628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4071346004.mount: Deactivated successfully. Feb 12 19:23:20.902489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1899059215.mount: Deactivated successfully. Feb 12 19:23:20.928706 env[1376]: time="2024-02-12T19:23:20.928636557Z" level=info msg="CreateContainer within sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6\"" Feb 12 19:23:20.929427 env[1376]: time="2024-02-12T19:23:20.929391749Z" level=info msg="StartContainer for \"eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6\"" Feb 12 19:23:20.947587 systemd[1]: Started cri-containerd-eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6.scope. Feb 12 19:23:20.974929 env[1376]: time="2024-02-12T19:23:20.974880377Z" level=info msg="StartContainer for \"eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6\" returns successfully" Feb 12 19:23:20.981541 systemd[1]: cri-containerd-eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6.scope: Deactivated successfully. 
Feb 12 19:23:21.161116 kubelet[1853]: E0212 19:23:21.135324 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:21.896107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6-rootfs.mount: Deactivated successfully. Feb 12 19:23:22.135958 kubelet[1853]: E0212 19:23:22.135924 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:22.838169 env[1376]: time="2024-02-12T19:23:22.837948197Z" level=info msg="shim disconnected" id=eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6 Feb 12 19:23:22.838169 env[1376]: time="2024-02-12T19:23:22.838000345Z" level=warning msg="cleaning up after shim disconnected" id=eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6 namespace=k8s.io Feb 12 19:23:22.838169 env[1376]: time="2024-02-12T19:23:22.838009343Z" level=info msg="cleaning up dead shim" Feb 12 19:23:22.844247 env[1376]: time="2024-02-12T19:23:22.844209826Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:23:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2059 runtime=io.containerd.runc.v2\n" Feb 12 19:23:23.136469 kubelet[1853]: E0212 19:23:23.136367 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:23.299922 env[1376]: time="2024-02-12T19:23:23.299881224Z" level=info msg="CreateContainer within sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:23:23.324165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182190761.mount: Deactivated successfully. Feb 12 19:23:23.329519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3881478246.mount: Deactivated successfully. 
Feb 12 19:23:23.348964 env[1376]: time="2024-02-12T19:23:23.348913052Z" level=info msg="CreateContainer within sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd\"" Feb 12 19:23:23.350220 env[1376]: time="2024-02-12T19:23:23.350186725Z" level=info msg="StartContainer for \"3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd\"" Feb 12 19:23:23.373772 systemd[1]: Started cri-containerd-3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd.scope. Feb 12 19:23:23.417968 env[1376]: time="2024-02-12T19:23:23.417717104Z" level=info msg="StartContainer for \"3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd\" returns successfully" Feb 12 19:23:23.421534 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:23:23.421726 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:23:23.421898 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:23:23.423275 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:23:23.426454 systemd[1]: cri-containerd-3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd.scope: Deactivated successfully. Feb 12 19:23:23.434825 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 19:23:23.482059 env[1376]: time="2024-02-12T19:23:23.481995656Z" level=info msg="shim disconnected" id=3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd Feb 12 19:23:23.482615 env[1376]: time="2024-02-12T19:23:23.482593921Z" level=warning msg="cleaning up after shim disconnected" id=3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd namespace=k8s.io Feb 12 19:23:23.482691 env[1376]: time="2024-02-12T19:23:23.482678382Z" level=info msg="cleaning up dead shim" Feb 12 19:23:23.489541 env[1376]: time="2024-02-12T19:23:23.489488367Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:23:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2123 runtime=io.containerd.runc.v2\n" Feb 12 19:23:24.137386 kubelet[1853]: E0212 19:23:24.137346 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:24.170889 env[1376]: time="2024-02-12T19:23:24.170845133Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:24.178134 env[1376]: time="2024-02-12T19:23:24.178089665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:24.181120 env[1376]: time="2024-02-12T19:23:24.181081169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:24.183897 env[1376]: time="2024-02-12T19:23:24.183860160Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 
19:23:24.184243 env[1376]: time="2024-02-12T19:23:24.184210644Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\"" Feb 12 19:23:24.186152 env[1376]: time="2024-02-12T19:23:24.186114706Z" level=info msg="CreateContainer within sandbox \"d1cc3035892db9e866593a48592ee631079e59975fe7f172f268ca1d84107835\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:23:24.216184 env[1376]: time="2024-02-12T19:23:24.216134967Z" level=info msg="CreateContainer within sandbox \"d1cc3035892db9e866593a48592ee631079e59975fe7f172f268ca1d84107835\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6764817edf2b3f5b12a044e8bcd834b8704c3ea93b04932473030b639a967cba\"" Feb 12 19:23:24.217013 env[1376]: time="2024-02-12T19:23:24.216986461Z" level=info msg="StartContainer for \"6764817edf2b3f5b12a044e8bcd834b8704c3ea93b04932473030b639a967cba\"" Feb 12 19:23:24.231265 systemd[1]: Started cri-containerd-6764817edf2b3f5b12a044e8bcd834b8704c3ea93b04932473030b639a967cba.scope. Feb 12 19:23:24.264905 env[1376]: time="2024-02-12T19:23:24.264854690Z" level=info msg="StartContainer for \"6764817edf2b3f5b12a044e8bcd834b8704c3ea93b04932473030b639a967cba\" returns successfully" Feb 12 19:23:24.305111 env[1376]: time="2024-02-12T19:23:24.305071556Z" level=info msg="CreateContainer within sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:23:24.319433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd-rootfs.mount: Deactivated successfully. 
Feb 12 19:23:24.331372 kubelet[1853]: I0212 19:23:24.331322 1853 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6n4qk" podStartSLOduration=3.308199235 podCreationTimestamp="2024-02-12 19:23:11 +0000 UTC" firstStartedPulling="2024-02-12 19:23:14.161367814 +0000 UTC m=+4.992942668" lastFinishedPulling="2024-02-12 19:23:24.184447112 +0000 UTC m=+15.016021966" observedRunningTime="2024-02-12 19:23:24.331029387 +0000 UTC m=+15.162604281" watchObservedRunningTime="2024-02-12 19:23:24.331278533 +0000 UTC m=+15.162853427" Feb 12 19:23:24.347645 env[1376]: time="2024-02-12T19:23:24.347575961Z" level=info msg="CreateContainer within sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6\"" Feb 12 19:23:24.348351 env[1376]: time="2024-02-12T19:23:24.348329276Z" level=info msg="StartContainer for \"0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6\"" Feb 12 19:23:24.366643 systemd[1]: Started cri-containerd-0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6.scope. Feb 12 19:23:24.395267 systemd[1]: cri-containerd-0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6.scope: Deactivated successfully. 
Feb 12 19:23:24.399453 env[1376]: time="2024-02-12T19:23:24.399412081Z" level=info msg="StartContainer for \"0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6\" returns successfully" Feb 12 19:23:24.814315 env[1376]: time="2024-02-12T19:23:24.814268842Z" level=info msg="shim disconnected" id=0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6 Feb 12 19:23:24.814586 env[1376]: time="2024-02-12T19:23:24.814565257Z" level=warning msg="cleaning up after shim disconnected" id=0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6 namespace=k8s.io Feb 12 19:23:24.814665 env[1376]: time="2024-02-12T19:23:24.814652358Z" level=info msg="cleaning up dead shim" Feb 12 19:23:24.821506 env[1376]: time="2024-02-12T19:23:24.821466824Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:23:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2290 runtime=io.containerd.runc.v2\n" Feb 12 19:23:25.138192 kubelet[1853]: E0212 19:23:25.138089 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:25.310923 env[1376]: time="2024-02-12T19:23:25.310881721Z" level=info msg="CreateContainer within sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:23:25.318762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6-rootfs.mount: Deactivated successfully. Feb 12 19:23:25.333091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3162972054.mount: Deactivated successfully. Feb 12 19:23:25.337475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289243768.mount: Deactivated successfully. 
Feb 12 19:23:25.351932 env[1376]: time="2024-02-12T19:23:25.351889221Z" level=info msg="CreateContainer within sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f\"" Feb 12 19:23:25.352901 env[1376]: time="2024-02-12T19:23:25.352864174Z" level=info msg="StartContainer for \"eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f\"" Feb 12 19:23:25.366876 systemd[1]: Started cri-containerd-eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f.scope. Feb 12 19:23:25.391361 systemd[1]: cri-containerd-eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f.scope: Deactivated successfully. Feb 12 19:23:25.393444 env[1376]: time="2024-02-12T19:23:25.393376580Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43e0018c_55d3_49f9_991d_b25ef48b639f.slice/cri-containerd-eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f.scope/memory.events\": no such file or directory" Feb 12 19:23:25.398282 env[1376]: time="2024-02-12T19:23:25.398241263Z" level=info msg="StartContainer for \"eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f\" returns successfully" Feb 12 19:23:25.429494 env[1376]: time="2024-02-12T19:23:25.429442094Z" level=info msg="shim disconnected" id=eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f Feb 12 19:23:25.429494 env[1376]: time="2024-02-12T19:23:25.429492123Z" level=warning msg="cleaning up after shim disconnected" id=eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f namespace=k8s.io Feb 12 19:23:25.429494 env[1376]: time="2024-02-12T19:23:25.429501681Z" level=info msg="cleaning up dead shim" Feb 12 19:23:25.435642 env[1376]: time="2024-02-12T19:23:25.435599222Z" level=warning 
msg="cleanup warnings time=\"2024-02-12T19:23:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2395 runtime=io.containerd.runc.v2\n" Feb 12 19:23:26.139246 kubelet[1853]: E0212 19:23:26.139210 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:26.314699 env[1376]: time="2024-02-12T19:23:26.314659243Z" level=info msg="CreateContainer within sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:23:26.338271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095023291.mount: Deactivated successfully. Feb 12 19:23:26.344188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount101573058.mount: Deactivated successfully. Feb 12 19:23:26.356591 env[1376]: time="2024-02-12T19:23:26.356544962Z" level=info msg="CreateContainer within sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\"" Feb 12 19:23:26.357312 env[1376]: time="2024-02-12T19:23:26.357286968Z" level=info msg="StartContainer for \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\"" Feb 12 19:23:26.371389 systemd[1]: Started cri-containerd-b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24.scope. Feb 12 19:23:26.408505 env[1376]: time="2024-02-12T19:23:26.408036090Z" level=info msg="StartContainer for \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\" returns successfully" Feb 12 19:23:26.484846 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 12 19:23:26.524696 kubelet[1853]: I0212 19:23:26.523925 1853 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:23:26.849934 kernel: Initializing XFRM netlink socket Feb 12 19:23:26.857761 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 12 19:23:27.140170 kubelet[1853]: E0212 19:23:27.140027 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:27.330986 kubelet[1853]: I0212 19:23:27.330956 1853 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rprmz" podStartSLOduration=9.615031139 podCreationTimestamp="2024-02-12 19:23:11 +0000 UTC" firstStartedPulling="2024-02-12 19:23:14.156093131 +0000 UTC m=+4.987668025" lastFinishedPulling="2024-02-12 19:23:20.8719676 +0000 UTC m=+11.703542534" observedRunningTime="2024-02-12 19:23:27.330369396 +0000 UTC m=+18.161944290" watchObservedRunningTime="2024-02-12 19:23:27.330905648 +0000 UTC m=+18.162480502" Feb 12 19:23:28.140925 kubelet[1853]: E0212 19:23:28.140890 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:28.491048 systemd-networkd[1526]: cilium_host: Link UP Feb 12 19:23:28.491700 systemd-networkd[1526]: cilium_net: Link UP Feb 12 19:23:28.503009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:23:28.503143 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:23:28.505886 systemd-networkd[1526]: cilium_net: Gained carrier Feb 12 19:23:28.507017 systemd-networkd[1526]: cilium_host: Gained carrier Feb 12 19:23:28.633330 systemd-networkd[1526]: cilium_vxlan: Link UP Feb 12 19:23:28.633336 systemd-networkd[1526]: cilium_vxlan: Gained carrier Feb 12 19:23:28.879797 kernel: NET: Registered PF_ALG protocol family Feb 12 19:23:28.951922 systemd-networkd[1526]: cilium_net: 
Gained IPv6LL Feb 12 19:23:29.141994 kubelet[1853]: E0212 19:23:29.141943 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:29.407921 systemd-networkd[1526]: cilium_host: Gained IPv6LL Feb 12 19:23:29.571149 systemd-networkd[1526]: lxc_health: Link UP Feb 12 19:23:29.590138 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:23:29.589660 systemd-networkd[1526]: lxc_health: Gained carrier Feb 12 19:23:30.047905 systemd-networkd[1526]: cilium_vxlan: Gained IPv6LL Feb 12 19:23:30.127673 kubelet[1853]: E0212 19:23:30.127633 1853 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:30.143037 kubelet[1853]: E0212 19:23:30.142997 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:30.623888 systemd-networkd[1526]: lxc_health: Gained IPv6LL Feb 12 19:23:31.143351 kubelet[1853]: E0212 19:23:31.143308 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:31.696392 kubelet[1853]: I0212 19:23:31.696352 1853 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:23:31.701542 systemd[1]: Created slice kubepods-besteffort-pod30f5433a_ce0b_4202_b753_9bff48a5a64a.slice. 
Feb 12 19:23:31.767113 kubelet[1853]: I0212 19:23:31.767072 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvqqd\" (UniqueName: \"kubernetes.io/projected/30f5433a-ce0b-4202-b753-9bff48a5a64a-kube-api-access-dvqqd\") pod \"nginx-deployment-845c78c8b9-jbvw9\" (UID: \"30f5433a-ce0b-4202-b753-9bff48a5a64a\") " pod="default/nginx-deployment-845c78c8b9-jbvw9" Feb 12 19:23:32.006895 env[1376]: time="2024-02-12T19:23:32.006466717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-jbvw9,Uid:30f5433a-ce0b-4202-b753-9bff48a5a64a,Namespace:default,Attempt:0,}" Feb 12 19:23:32.070041 systemd-networkd[1526]: lxc07d13888519e: Link UP Feb 12 19:23:32.081245 kernel: eth0: renamed from tmpfd4a4 Feb 12 19:23:32.096476 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:23:32.096597 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc07d13888519e: link becomes ready Feb 12 19:23:32.097028 systemd-networkd[1526]: lxc07d13888519e: Gained carrier Feb 12 19:23:32.143875 kubelet[1853]: E0212 19:23:32.143828 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:32.526815 kubelet[1853]: I0212 19:23:32.526780 1853 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 19:23:33.144150 kubelet[1853]: E0212 19:23:33.144100 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:33.231285 env[1376]: time="2024-02-12T19:23:33.231207640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:33.231285 env[1376]: time="2024-02-12T19:23:33.231248553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:33.231285 env[1376]: time="2024-02-12T19:23:33.231259391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:33.231841 env[1376]: time="2024-02-12T19:23:33.231796139Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd4a45a1eaf427dca63f94530ad99b179f810738c5f9ddb0957e81a686850229 pid=2917 runtime=io.containerd.runc.v2 Feb 12 19:23:33.243598 systemd[1]: Started cri-containerd-fd4a45a1eaf427dca63f94530ad99b179f810738c5f9ddb0957e81a686850229.scope. Feb 12 19:23:33.277876 env[1376]: time="2024-02-12T19:23:33.277019882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-jbvw9,Uid:30f5433a-ce0b-4202-b753-9bff48a5a64a,Namespace:default,Attempt:0,} returns sandbox id \"fd4a45a1eaf427dca63f94530ad99b179f810738c5f9ddb0957e81a686850229\"" Feb 12 19:23:33.278881 env[1376]: time="2024-02-12T19:23:33.278847809Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:23:33.888055 systemd-networkd[1526]: lxc07d13888519e: Gained IPv6LL Feb 12 19:23:34.144843 kubelet[1853]: E0212 19:23:34.144728 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:35.145352 kubelet[1853]: E0212 19:23:35.145315 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:35.661620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383464949.mount: Deactivated successfully. 
Feb 12 19:23:36.146377 kubelet[1853]: E0212 19:23:36.146341 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:36.514944 env[1376]: time="2024-02-12T19:23:36.514890833Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:36.519864 env[1376]: time="2024-02-12T19:23:36.519812976Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:36.524208 env[1376]: time="2024-02-12T19:23:36.524171567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:36.528909 env[1376]: time="2024-02-12T19:23:36.528877065Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:36.530394 env[1376]: time="2024-02-12T19:23:36.530350152Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 12 19:23:36.532027 env[1376]: time="2024-02-12T19:23:36.531997852Z" level=info msg="CreateContainer within sandbox \"fd4a45a1eaf427dca63f94530ad99b179f810738c5f9ddb0957e81a686850229\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 19:23:36.557240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740660814.mount: Deactivated successfully. 
Feb 12 19:23:36.571012 env[1376]: time="2024-02-12T19:23:36.570960900Z" level=info msg="CreateContainer within sandbox \"fd4a45a1eaf427dca63f94530ad99b179f810738c5f9ddb0957e81a686850229\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ae4271e3b140354d8f90cf6a675ffabc557ab5ccf27cccf124144611b1702d27\"" Feb 12 19:23:36.571787 env[1376]: time="2024-02-12T19:23:36.571730939Z" level=info msg="StartContainer for \"ae4271e3b140354d8f90cf6a675ffabc557ab5ccf27cccf124144611b1702d27\"" Feb 12 19:23:36.588185 systemd[1]: Started cri-containerd-ae4271e3b140354d8f90cf6a675ffabc557ab5ccf27cccf124144611b1702d27.scope. Feb 12 19:23:36.619067 env[1376]: time="2024-02-12T19:23:36.618943205Z" level=info msg="StartContainer for \"ae4271e3b140354d8f90cf6a675ffabc557ab5ccf27cccf124144611b1702d27\" returns successfully" Feb 12 19:23:37.146836 kubelet[1853]: E0212 19:23:37.146792 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:37.340220 kubelet[1853]: I0212 19:23:37.340179 1853 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-jbvw9" podStartSLOduration=3.087762429 podCreationTimestamp="2024-02-12 19:23:31 +0000 UTC" firstStartedPulling="2024-02-12 19:23:33.278322979 +0000 UTC m=+24.109897873" lastFinishedPulling="2024-02-12 19:23:36.530702656 +0000 UTC m=+27.362277550" observedRunningTime="2024-02-12 19:23:37.339786321 +0000 UTC m=+28.171361215" watchObservedRunningTime="2024-02-12 19:23:37.340142106 +0000 UTC m=+28.171716960" Feb 12 19:23:38.147553 kubelet[1853]: E0212 19:23:38.147514 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:39.148083 kubelet[1853]: E0212 19:23:39.148043 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:40.148628 kubelet[1853]: E0212 
19:23:40.148588 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:40.997353 kubelet[1853]: I0212 19:23:40.997275 1853 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:23:41.001702 systemd[1]: Created slice kubepods-besteffort-poddeaef078_634d_40f7_b3fc_3b3d856f8097.slice. Feb 12 19:23:41.114445 kubelet[1853]: I0212 19:23:41.114410 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/deaef078-634d-40f7-b3fc-3b3d856f8097-data\") pod \"nfs-server-provisioner-0\" (UID: \"deaef078-634d-40f7-b3fc-3b3d856f8097\") " pod="default/nfs-server-provisioner-0" Feb 12 19:23:41.114698 kubelet[1853]: I0212 19:23:41.114654 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gj9x\" (UniqueName: \"kubernetes.io/projected/deaef078-634d-40f7-b3fc-3b3d856f8097-kube-api-access-5gj9x\") pod \"nfs-server-provisioner-0\" (UID: \"deaef078-634d-40f7-b3fc-3b3d856f8097\") " pod="default/nfs-server-provisioner-0" Feb 12 19:23:41.149602 kubelet[1853]: E0212 19:23:41.149578 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:41.305122 env[1376]: time="2024-02-12T19:23:41.304663379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:deaef078-634d-40f7-b3fc-3b3d856f8097,Namespace:default,Attempt:0,}" Feb 12 19:23:41.355258 systemd-networkd[1526]: lxca60e0f027827: Link UP Feb 12 19:23:41.365833 kernel: eth0: renamed from tmp021da Feb 12 19:23:41.384144 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:23:41.384243 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca60e0f027827: link becomes ready Feb 12 19:23:41.384503 systemd-networkd[1526]: lxca60e0f027827: Gained carrier Feb 12 19:23:41.567711 env[1376]: 
time="2024-02-12T19:23:41.567279172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:41.567711 env[1376]: time="2024-02-12T19:23:41.567330444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:41.567711 env[1376]: time="2024-02-12T19:23:41.567340963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:41.568258 env[1376]: time="2024-02-12T19:23:41.568186926Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/021da9f3bb7fe75e6cdd23520ac586bd725ac308e16692ebedf03683df39aa38 pid=3043 runtime=io.containerd.runc.v2 Feb 12 19:23:41.586281 systemd[1]: Started cri-containerd-021da9f3bb7fe75e6cdd23520ac586bd725ac308e16692ebedf03683df39aa38.scope. Feb 12 19:23:41.615399 env[1376]: time="2024-02-12T19:23:41.615353234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:deaef078-634d-40f7-b3fc-3b3d856f8097,Namespace:default,Attempt:0,} returns sandbox id \"021da9f3bb7fe75e6cdd23520ac586bd725ac308e16692ebedf03683df39aa38\"" Feb 12 19:23:41.617671 env[1376]: time="2024-02-12T19:23:41.617641437Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 19:23:42.150373 kubelet[1853]: E0212 19:23:42.150336 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:42.227640 systemd[1]: run-containerd-runc-k8s.io-021da9f3bb7fe75e6cdd23520ac586bd725ac308e16692ebedf03683df39aa38-runc.Dmmuvp.mount: Deactivated successfully. 
Feb 12 19:23:43.151021 kubelet[1853]: E0212 19:23:43.150969 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:43.423861 systemd-networkd[1526]: lxca60e0f027827: Gained IPv6LL Feb 12 19:23:44.152406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2120211816.mount: Deactivated successfully. Feb 12 19:23:44.152968 kubelet[1853]: E0212 19:23:44.152626 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:45.153068 kubelet[1853]: E0212 19:23:45.153018 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:46.153517 kubelet[1853]: E0212 19:23:46.153474 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:46.226512 env[1376]: time="2024-02-12T19:23:46.226464872Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:46.236348 env[1376]: time="2024-02-12T19:23:46.236309111Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:46.242142 env[1376]: time="2024-02-12T19:23:46.242107084Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:46.247762 env[1376]: time="2024-02-12T19:23:46.247709641Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:46.248406 env[1376]: time="2024-02-12T19:23:46.248377600Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 12 19:23:46.250540 env[1376]: time="2024-02-12T19:23:46.250506380Z" level=info msg="CreateContainer within sandbox \"021da9f3bb7fe75e6cdd23520ac586bd725ac308e16692ebedf03683df39aa38\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 19:23:46.277043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447477811.mount: Deactivated successfully. Feb 12 19:23:46.283456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351963118.mount: Deactivated successfully. Feb 12 19:23:46.298748 env[1376]: time="2024-02-12T19:23:46.298686465Z" level=info msg="CreateContainer within sandbox \"021da9f3bb7fe75e6cdd23520ac586bd725ac308e16692ebedf03683df39aa38\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"16fbc66d05c0cf609e03747bbd70e1e71af9177cdbb9277c3f71b46b9567834a\"" Feb 12 19:23:46.299226 env[1376]: time="2024-02-12T19:23:46.299202882Z" level=info msg="StartContainer for \"16fbc66d05c0cf609e03747bbd70e1e71af9177cdbb9277c3f71b46b9567834a\"" Feb 12 19:23:46.316995 systemd[1]: Started cri-containerd-16fbc66d05c0cf609e03747bbd70e1e71af9177cdbb9277c3f71b46b9567834a.scope. 
Feb 12 19:23:46.349773 env[1376]: time="2024-02-12T19:23:46.349434438Z" level=info msg="StartContainer for \"16fbc66d05c0cf609e03747bbd70e1e71af9177cdbb9277c3f71b46b9567834a\" returns successfully" Feb 12 19:23:47.154151 kubelet[1853]: E0212 19:23:47.154113 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:48.155027 kubelet[1853]: E0212 19:23:48.154987 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:49.155651 kubelet[1853]: E0212 19:23:49.155617 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:50.127919 kubelet[1853]: E0212 19:23:50.127884 1853 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:50.156170 kubelet[1853]: E0212 19:23:50.156140 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:51.156605 kubelet[1853]: E0212 19:23:51.156568 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:52.156753 kubelet[1853]: E0212 19:23:52.156690 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:53.157385 kubelet[1853]: E0212 19:23:53.157347 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:54.158177 kubelet[1853]: E0212 19:23:54.158131 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:55.159220 kubelet[1853]: E0212 19:23:55.159185 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 19:23:56.160598 kubelet[1853]: E0212 19:23:56.160564 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:56.164154 kubelet[1853]: I0212 19:23:56.164121 1853 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.532442378 podCreationTimestamp="2024-02-12 19:23:40 +0000 UTC" firstStartedPulling="2024-02-12 19:23:41.617080475 +0000 UTC m=+32.448655329" lastFinishedPulling="2024-02-12 19:23:46.248727117 +0000 UTC m=+37.080302011" observedRunningTime="2024-02-12 19:23:47.361559747 +0000 UTC m=+38.193134641" watchObservedRunningTime="2024-02-12 19:23:56.16408906 +0000 UTC m=+46.995663954" Feb 12 19:23:56.164448 kubelet[1853]: I0212 19:23:56.164424 1853 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:23:56.169257 systemd[1]: Created slice kubepods-besteffort-podaefa465c_3a41_4cac_884b_6d8955ece440.slice. Feb 12 19:23:56.184427 kubelet[1853]: I0212 19:23:56.184398 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8ea421c6-77b5-4ea9-af12-d59bf1ba176b\" (UniqueName: \"kubernetes.io/nfs/aefa465c-3a41-4cac-884b-6d8955ece440-pvc-8ea421c6-77b5-4ea9-af12-d59bf1ba176b\") pod \"test-pod-1\" (UID: \"aefa465c-3a41-4cac-884b-6d8955ece440\") " pod="default/test-pod-1" Feb 12 19:23:56.184641 kubelet[1853]: I0212 19:23:56.184629 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6cnq\" (UniqueName: \"kubernetes.io/projected/aefa465c-3a41-4cac-884b-6d8955ece440-kube-api-access-c6cnq\") pod \"test-pod-1\" (UID: \"aefa465c-3a41-4cac-884b-6d8955ece440\") " pod="default/test-pod-1" Feb 12 19:23:56.498764 kernel: FS-Cache: Loaded Feb 12 19:23:56.598570 kernel: RPC: Registered named UNIX socket transport module. 
Feb 12 19:23:56.598680 kernel: RPC: Registered udp transport module. Feb 12 19:23:56.602521 kernel: RPC: Registered tcp transport module. Feb 12 19:23:56.607512 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 12 19:23:56.790767 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 19:23:57.035746 kernel: NFS: Registering the id_resolver key type Feb 12 19:23:57.035895 kernel: Key type id_resolver registered Feb 12 19:23:57.035922 kernel: Key type id_legacy registered Feb 12 19:23:57.160909 kubelet[1853]: E0212 19:23:57.160869 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:57.375981 nfsidmap[3162]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-c97c98db58' Feb 12 19:23:57.480451 nfsidmap[3163]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-c97c98db58' Feb 12 19:23:57.672652 env[1376]: time="2024-02-12T19:23:57.672326686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:aefa465c-3a41-4cac-884b-6d8955ece440,Namespace:default,Attempt:0,}" Feb 12 19:23:57.735957 systemd-networkd[1526]: lxc9eeeb317ff64: Link UP Feb 12 19:23:57.747767 kernel: eth0: renamed from tmp4f517 Feb 12 19:23:57.762764 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:23:57.762883 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9eeeb317ff64: link becomes ready Feb 12 19:23:57.763123 systemd-networkd[1526]: lxc9eeeb317ff64: Gained carrier Feb 12 19:23:57.948366 env[1376]: time="2024-02-12T19:23:57.948199830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:57.948366 env[1376]: time="2024-02-12T19:23:57.948236066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:57.948366 env[1376]: time="2024-02-12T19:23:57.948246265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:57.949510 env[1376]: time="2024-02-12T19:23:57.948617591Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f5176dad18766c63e8f26771ba3590c82de5038457cedd6bd5df1c14604d44a pid=3191 runtime=io.containerd.runc.v2 Feb 12 19:23:57.960358 systemd[1]: Started cri-containerd-4f5176dad18766c63e8f26771ba3590c82de5038457cedd6bd5df1c14604d44a.scope. Feb 12 19:23:57.990190 env[1376]: time="2024-02-12T19:23:57.990147904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:aefa465c-3a41-4cac-884b-6d8955ece440,Namespace:default,Attempt:0,} returns sandbox id \"4f5176dad18766c63e8f26771ba3590c82de5038457cedd6bd5df1c14604d44a\"" Feb 12 19:23:57.991603 env[1376]: time="2024-02-12T19:23:57.991546813Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:23:58.161864 kubelet[1853]: E0212 19:23:58.161809 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:58.318442 env[1376]: time="2024-02-12T19:23:58.317985338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:58.324347 env[1376]: time="2024-02-12T19:23:58.324298480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:58.336464 env[1376]: time="2024-02-12T19:23:58.336417852Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:58.341638 env[1376]: time="2024-02-12T19:23:58.341602298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:58.342215 env[1376]: time="2024-02-12T19:23:58.342185724Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 12 19:23:58.344441 env[1376]: time="2024-02-12T19:23:58.344400202Z" level=info msg="CreateContainer within sandbox \"4f5176dad18766c63e8f26771ba3590c82de5038457cedd6bd5df1c14604d44a\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 19:23:58.378436 env[1376]: time="2024-02-12T19:23:58.378384893Z" level=info msg="CreateContainer within sandbox \"4f5176dad18766c63e8f26771ba3590c82de5038457cedd6bd5df1c14604d44a\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"0d4402dba11a585a72bb1bf07f24e6bffd02269afc2980bf685d6d30c6134c73\"" Feb 12 19:23:58.378927 env[1376]: time="2024-02-12T19:23:58.378900886Z" level=info msg="StartContainer for \"0d4402dba11a585a72bb1bf07f24e6bffd02269afc2980bf685d6d30c6134c73\"" Feb 12 19:23:58.392488 systemd[1]: Started cri-containerd-0d4402dba11a585a72bb1bf07f24e6bffd02269afc2980bf685d6d30c6134c73.scope. 
Feb 12 19:23:58.432012 env[1376]: time="2024-02-12T19:23:58.431935436Z" level=info msg="StartContainer for \"0d4402dba11a585a72bb1bf07f24e6bffd02269afc2980bf685d6d30c6134c73\" returns successfully" Feb 12 19:23:58.911890 systemd-networkd[1526]: lxc9eeeb317ff64: Gained IPv6LL Feb 12 19:23:59.162233 kubelet[1853]: E0212 19:23:59.162130 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:59.381315 kubelet[1853]: I0212 19:23:59.381278 1853 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.030029129 podCreationTimestamp="2024-02-12 19:23:41 +0000 UTC" firstStartedPulling="2024-02-12 19:23:57.991300916 +0000 UTC m=+48.822875810" lastFinishedPulling="2024-02-12 19:23:58.342516734 +0000 UTC m=+49.174091588" observedRunningTime="2024-02-12 19:23:59.381140837 +0000 UTC m=+50.212715731" watchObservedRunningTime="2024-02-12 19:23:59.381244907 +0000 UTC m=+50.212819801" Feb 12 19:24:00.163242 kubelet[1853]: E0212 19:24:00.163200 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:01.163629 kubelet[1853]: E0212 19:24:01.163593 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:02.164247 kubelet[1853]: E0212 19:24:02.164208 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:03.165129 kubelet[1853]: E0212 19:24:03.165099 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:03.956531 systemd[1]: run-containerd-runc-k8s.io-b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24-runc.srgBDN.mount: Deactivated successfully. 
Feb 12 19:24:03.971020 env[1376]: time="2024-02-12T19:24:03.970958981Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:24:03.977402 env[1376]: time="2024-02-12T19:24:03.977364497Z" level=info msg="StopContainer for \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\" with timeout 1 (s)" Feb 12 19:24:03.977708 env[1376]: time="2024-02-12T19:24:03.977685511Z" level=info msg="Stop container \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\" with signal terminated" Feb 12 19:24:03.982945 systemd-networkd[1526]: lxc_health: Link DOWN Feb 12 19:24:03.982954 systemd-networkd[1526]: lxc_health: Lost carrier Feb 12 19:24:04.017138 systemd[1]: cri-containerd-b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24.scope: Deactivated successfully. Feb 12 19:24:04.017431 systemd[1]: cri-containerd-b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24.scope: Consumed 6.195s CPU time. Feb 12 19:24:04.032539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24-rootfs.mount: Deactivated successfully. 
Feb 12 19:24:04.166569 kubelet[1853]: E0212 19:24:04.166511 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:04.530541 env[1376]: time="2024-02-12T19:24:04.530491411Z" level=info msg="shim disconnected" id=b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24 Feb 12 19:24:04.530541 env[1376]: time="2024-02-12T19:24:04.530538647Z" level=warning msg="cleaning up after shim disconnected" id=b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24 namespace=k8s.io Feb 12 19:24:04.530766 env[1376]: time="2024-02-12T19:24:04.530548487Z" level=info msg="cleaning up dead shim" Feb 12 19:24:04.537088 env[1376]: time="2024-02-12T19:24:04.537028968Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3321 runtime=io.containerd.runc.v2\n" Feb 12 19:24:04.542900 env[1376]: time="2024-02-12T19:24:04.542860181Z" level=info msg="StopContainer for \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\" returns successfully" Feb 12 19:24:04.543517 env[1376]: time="2024-02-12T19:24:04.543478731Z" level=info msg="StopPodSandbox for \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\"" Feb 12 19:24:04.543658 env[1376]: time="2024-02-12T19:24:04.543638479Z" level=info msg="Container to stop \"3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:04.543723 env[1376]: time="2024-02-12T19:24:04.543707593Z" level=info msg="Container to stop \"0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:04.543825 env[1376]: time="2024-02-12T19:24:04.543806905Z" level=info msg="Container to stop \"eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Feb 12 19:24:04.543887 env[1376]: time="2024-02-12T19:24:04.543870900Z" level=info msg="Container to stop \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:04.543955 env[1376]: time="2024-02-12T19:24:04.543939575Z" level=info msg="Container to stop \"eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:04.545444 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212-shm.mount: Deactivated successfully. Feb 12 19:24:04.550730 systemd[1]: cri-containerd-a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212.scope: Deactivated successfully. Feb 12 19:24:04.585825 env[1376]: time="2024-02-12T19:24:04.585774305Z" level=info msg="shim disconnected" id=a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212 Feb 12 19:24:04.586047 env[1376]: time="2024-02-12T19:24:04.586029525Z" level=warning msg="cleaning up after shim disconnected" id=a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212 namespace=k8s.io Feb 12 19:24:04.586126 env[1376]: time="2024-02-12T19:24:04.586113398Z" level=info msg="cleaning up dead shim" Feb 12 19:24:04.593251 env[1376]: time="2024-02-12T19:24:04.593211870Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3350 runtime=io.containerd.runc.v2\n" Feb 12 19:24:04.593712 env[1376]: time="2024-02-12T19:24:04.593682712Z" level=info msg="TearDown network for sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" successfully" Feb 12 19:24:04.593831 env[1376]: time="2024-02-12T19:24:04.593812741Z" level=info msg="StopPodSandbox for \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" returns 
successfully" Feb 12 19:24:04.729199 kubelet[1853]: I0212 19:24:04.729166 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cni-path\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.729422 kubelet[1853]: I0212 19:24:04.729409 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-host-proc-sys-net\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.729533 kubelet[1853]: I0212 19:24:04.729523 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-host-proc-sys-kernel\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.729631 kubelet[1853]: I0212 19:24:04.729621 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-etc-cni-netd\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.729725 kubelet[1853]: I0212 19:24:04.729715 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-xtables-lock\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.729845 kubelet[1853]: I0212 19:24:04.729835 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-cgroup\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.729942 kubelet[1853]: I0212 19:24:04.729932 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-lib-modules\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.730032 kubelet[1853]: I0212 19:24:04.730002 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:04.730126 kubelet[1853]: I0212 19:24:04.730113 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43e0018c-55d3-49f9-991d-b25ef48b639f-clustermesh-secrets\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.730233 kubelet[1853]: I0212 19:24:04.730223 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43e0018c-55d3-49f9-991d-b25ef48b639f-hubble-tls\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.730322 kubelet[1853]: I0212 19:24:04.730313 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-bpf-maps\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 
19:24:04.730408 kubelet[1853]: I0212 19:24:04.730399 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-config-path\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.730498 kubelet[1853]: I0212 19:24:04.730490 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw649\" (UniqueName: \"kubernetes.io/projected/43e0018c-55d3-49f9-991d-b25ef48b639f-kube-api-access-jw649\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.730589 kubelet[1853]: I0212 19:24:04.730580 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-run\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.730675 kubelet[1853]: I0212 19:24:04.730666 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-hostproc\") pod \"43e0018c-55d3-49f9-991d-b25ef48b639f\" (UID: \"43e0018c-55d3-49f9-991d-b25ef48b639f\") " Feb 12 19:24:04.730788 kubelet[1853]: I0212 19:24:04.730772 1853 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-lib-modules\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.730867 kubelet[1853]: I0212 19:24:04.729276 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cni-path" (OuterVolumeSpecName: "cni-path") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:04.730943 kubelet[1853]: I0212 19:24:04.730786 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:04.730996 kubelet[1853]: I0212 19:24:04.729466 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:04.731061 kubelet[1853]: I0212 19:24:04.729576 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:04.731112 kubelet[1853]: I0212 19:24:04.729680 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:04.731181 kubelet[1853]: I0212 19:24:04.729765 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:04.731231 kubelet[1853]: I0212 19:24:04.729898 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:04.731475 kubelet[1853]: W0212 19:24:04.731435 1853 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/43e0018c-55d3-49f9-991d-b25ef48b639f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:24:04.733365 kubelet[1853]: I0212 19:24:04.733334 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e0018c-55d3-49f9-991d-b25ef48b639f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:24:04.733441 kubelet[1853]: I0212 19:24:04.733399 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:04.733824 kubelet[1853]: I0212 19:24:04.733799 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:24:04.733955 kubelet[1853]: I0212 19:24:04.733940 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-hostproc" (OuterVolumeSpecName: "hostproc") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:04.735411 kubelet[1853]: I0212 19:24:04.735373 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e0018c-55d3-49f9-991d-b25ef48b639f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:04.736370 kubelet[1853]: I0212 19:24:04.736346 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e0018c-55d3-49f9-991d-b25ef48b639f-kube-api-access-jw649" (OuterVolumeSpecName: "kube-api-access-jw649") pod "43e0018c-55d3-49f9-991d-b25ef48b639f" (UID: "43e0018c-55d3-49f9-991d-b25ef48b639f"). InnerVolumeSpecName "kube-api-access-jw649". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:04.831243 kubelet[1853]: I0212 19:24:04.831132 1853 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-hostproc\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.831399 kubelet[1853]: I0212 19:24:04.831387 1853 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-bpf-maps\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.831486 kubelet[1853]: I0212 19:24:04.831473 1853 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-config-path\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.831556 kubelet[1853]: I0212 19:24:04.831547 1853 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jw649\" (UniqueName: \"kubernetes.io/projected/43e0018c-55d3-49f9-991d-b25ef48b639f-kube-api-access-jw649\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.831617 kubelet[1853]: I0212 19:24:04.831609 1853 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-run\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.831680 kubelet[1853]: I0212 19:24:04.831664 1853 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cni-path\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.831854 kubelet[1853]: I0212 19:24:04.831734 1853 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-host-proc-sys-net\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.831955 kubelet[1853]: 
I0212 19:24:04.831945 1853 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-host-proc-sys-kernel\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.832024 kubelet[1853]: I0212 19:24:04.832016 1853 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-etc-cni-netd\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.832084 kubelet[1853]: I0212 19:24:04.832077 1853 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43e0018c-55d3-49f9-991d-b25ef48b639f-hubble-tls\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.832147 kubelet[1853]: I0212 19:24:04.832131 1853 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-xtables-lock\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.832207 kubelet[1853]: I0212 19:24:04.832199 1853 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43e0018c-55d3-49f9-991d-b25ef48b639f-cilium-cgroup\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.832278 kubelet[1853]: I0212 19:24:04.832270 1853 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43e0018c-55d3-49f9-991d-b25ef48b639f-clustermesh-secrets\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:04.952151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212-rootfs.mount: Deactivated successfully. 
Feb 12 19:24:04.952247 systemd[1]: var-lib-kubelet-pods-43e0018c\x2d55d3\x2d49f9\x2d991d\x2db25ef48b639f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djw649.mount: Deactivated successfully. Feb 12 19:24:04.952313 systemd[1]: var-lib-kubelet-pods-43e0018c\x2d55d3\x2d49f9\x2d991d\x2db25ef48b639f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:24:04.952364 systemd[1]: var-lib-kubelet-pods-43e0018c\x2d55d3\x2d49f9\x2d991d\x2db25ef48b639f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:24:05.167512 kubelet[1853]: E0212 19:24:05.167409 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:05.212031 kubelet[1853]: E0212 19:24:05.212006 1853 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:24:05.386846 kubelet[1853]: I0212 19:24:05.386819 1853 scope.go:115] "RemoveContainer" containerID="b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24" Feb 12 19:24:05.389315 env[1376]: time="2024-02-12T19:24:05.388980857Z" level=info msg="RemoveContainer for \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\"" Feb 12 19:24:05.391023 systemd[1]: Removed slice kubepods-burstable-pod43e0018c_55d3_49f9_991d_b25ef48b639f.slice. Feb 12 19:24:05.391106 systemd[1]: kubepods-burstable-pod43e0018c_55d3_49f9_991d_b25ef48b639f.slice: Consumed 6.275s CPU time. 
Feb 12 19:24:05.400584 env[1376]: time="2024-02-12T19:24:05.400456317Z" level=info msg="RemoveContainer for \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\" returns successfully" Feb 12 19:24:05.400862 kubelet[1853]: I0212 19:24:05.400845 1853 scope.go:115] "RemoveContainer" containerID="eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f" Feb 12 19:24:05.401800 env[1376]: time="2024-02-12T19:24:05.401769054Z" level=info msg="RemoveContainer for \"eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f\"" Feb 12 19:24:05.408551 env[1376]: time="2024-02-12T19:24:05.408514046Z" level=info msg="RemoveContainer for \"eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f\" returns successfully" Feb 12 19:24:05.408752 kubelet[1853]: I0212 19:24:05.408714 1853 scope.go:115] "RemoveContainer" containerID="0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6" Feb 12 19:24:05.409892 env[1376]: time="2024-02-12T19:24:05.409863420Z" level=info msg="RemoveContainer for \"0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6\"" Feb 12 19:24:05.416497 env[1376]: time="2024-02-12T19:24:05.416457503Z" level=info msg="RemoveContainer for \"0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6\" returns successfully" Feb 12 19:24:05.416692 kubelet[1853]: I0212 19:24:05.416670 1853 scope.go:115] "RemoveContainer" containerID="3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd" Feb 12 19:24:05.417677 env[1376]: time="2024-02-12T19:24:05.417605813Z" level=info msg="RemoveContainer for \"3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd\"" Feb 12 19:24:05.425058 env[1376]: time="2024-02-12T19:24:05.425018992Z" level=info msg="RemoveContainer for \"3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd\" returns successfully" Feb 12 19:24:05.425259 kubelet[1853]: I0212 19:24:05.425220 1853 scope.go:115] "RemoveContainer" 
containerID="eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6" Feb 12 19:24:05.426640 env[1376]: time="2024-02-12T19:24:05.426395965Z" level=info msg="RemoveContainer for \"eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6\"" Feb 12 19:24:05.435017 env[1376]: time="2024-02-12T19:24:05.434943775Z" level=info msg="RemoveContainer for \"eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6\" returns successfully" Feb 12 19:24:05.435188 kubelet[1853]: I0212 19:24:05.435163 1853 scope.go:115] "RemoveContainer" containerID="b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24" Feb 12 19:24:05.435502 env[1376]: time="2024-02-12T19:24:05.435426897Z" level=error msg="ContainerStatus for \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\": not found" Feb 12 19:24:05.435638 kubelet[1853]: E0212 19:24:05.435616 1853 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\": not found" containerID="b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24" Feb 12 19:24:05.435707 kubelet[1853]: I0212 19:24:05.435653 1853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24} err="failed to get container status \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4d5fd6ed421a6740418412551d6a832a569b276275610ec65fca0a007ed3e24\": not found" Feb 12 19:24:05.435707 kubelet[1853]: I0212 19:24:05.435663 1853 scope.go:115] "RemoveContainer" 
containerID="eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f" Feb 12 19:24:05.435932 env[1376]: time="2024-02-12T19:24:05.435886341Z" level=error msg="ContainerStatus for \"eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f\": not found" Feb 12 19:24:05.436178 kubelet[1853]: E0212 19:24:05.436166 1853 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f\": not found" containerID="eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f" Feb 12 19:24:05.436294 kubelet[1853]: I0212 19:24:05.436284 1853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f} err="failed to get container status \"eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb28b150891060b6b8ae16f001cb7765d1e19fb4ce028903f0b4131e1f74e40f\": not found" Feb 12 19:24:05.436376 kubelet[1853]: I0212 19:24:05.436367 1853 scope.go:115] "RemoveContainer" containerID="0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6" Feb 12 19:24:05.436646 env[1376]: time="2024-02-12T19:24:05.436596645Z" level=error msg="ContainerStatus for \"0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6\": not found" Feb 12 19:24:05.436848 kubelet[1853]: E0212 19:24:05.436833 1853 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6\": not found" containerID="0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6" Feb 12 19:24:05.436968 kubelet[1853]: I0212 19:24:05.436958 1853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6} err="failed to get container status \"0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ef25bc82b7eebea8f073bbf6d24f7508c8d3f818ee478c8d98fefa6fb90e4f6\": not found" Feb 12 19:24:05.437049 kubelet[1853]: I0212 19:24:05.437039 1853 scope.go:115] "RemoveContainer" containerID="3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd" Feb 12 19:24:05.437328 env[1376]: time="2024-02-12T19:24:05.437288871Z" level=error msg="ContainerStatus for \"3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd\": not found" Feb 12 19:24:05.437566 kubelet[1853]: E0212 19:24:05.437518 1853 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd\": not found" containerID="3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd" Feb 12 19:24:05.437677 kubelet[1853]: I0212 19:24:05.437667 1853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd} err="failed to get container status \"3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"3c72974b083e28ece83c6e60a01bd3039c4f9be3a1770328c87fdd78a15d04bd\": not found" Feb 12 19:24:05.437765 kubelet[1853]: I0212 19:24:05.437754 1853 scope.go:115] "RemoveContainer" containerID="eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6" Feb 12 19:24:05.438061 env[1376]: time="2024-02-12T19:24:05.438006775Z" level=error msg="ContainerStatus for \"eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6\": not found" Feb 12 19:24:05.438234 kubelet[1853]: E0212 19:24:05.438210 1853 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6\": not found" containerID="eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6" Feb 12 19:24:05.438348 kubelet[1853]: I0212 19:24:05.438339 1853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6} err="failed to get container status \"eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb08068eff1c3a5699b9530230f482395ba530b3a22e9c5ed1561d633173a5c6\": not found" Feb 12 19:24:06.167965 kubelet[1853]: E0212 19:24:06.167929 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:06.281881 kubelet[1853]: I0212 19:24:06.281852 1853 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=43e0018c-55d3-49f9-991d-b25ef48b639f path="/var/lib/kubelet/pods/43e0018c-55d3-49f9-991d-b25ef48b639f/volumes" Feb 12 19:24:07.168796 kubelet[1853]: E0212 19:24:07.168753 1853 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:07.528428 kubelet[1853]: I0212 19:24:07.528393 1853 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:24:07.528613 kubelet[1853]: E0212 19:24:07.528444 1853 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="43e0018c-55d3-49f9-991d-b25ef48b639f" containerName="apply-sysctl-overwrites" Feb 12 19:24:07.528613 kubelet[1853]: E0212 19:24:07.528455 1853 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="43e0018c-55d3-49f9-991d-b25ef48b639f" containerName="clean-cilium-state" Feb 12 19:24:07.528613 kubelet[1853]: E0212 19:24:07.528461 1853 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="43e0018c-55d3-49f9-991d-b25ef48b639f" containerName="cilium-agent" Feb 12 19:24:07.528613 kubelet[1853]: E0212 19:24:07.528468 1853 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="43e0018c-55d3-49f9-991d-b25ef48b639f" containerName="mount-cgroup" Feb 12 19:24:07.528613 kubelet[1853]: E0212 19:24:07.528476 1853 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="43e0018c-55d3-49f9-991d-b25ef48b639f" containerName="mount-bpf-fs" Feb 12 19:24:07.528613 kubelet[1853]: I0212 19:24:07.528496 1853 memory_manager.go:346] "RemoveStaleState removing state" podUID="43e0018c-55d3-49f9-991d-b25ef48b639f" containerName="cilium-agent" Feb 12 19:24:07.532858 systemd[1]: Created slice kubepods-besteffort-pod7abfa5c3_4657_48b9_b99d_c2c9863efbed.slice. 
Feb 12 19:24:07.541474 kubelet[1853]: W0212 19:24:07.541435 1853 reflector.go:533] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.200.20.25" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.25' and this object Feb 12 19:24:07.541474 kubelet[1853]: E0212 19:24:07.541473 1853 reflector.go:148] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.200.20.25" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.25' and this object Feb 12 19:24:07.543716 kubelet[1853]: I0212 19:24:07.543694 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccfq7\" (UniqueName: \"kubernetes.io/projected/7abfa5c3-4657-48b9-b99d-c2c9863efbed-kube-api-access-ccfq7\") pod \"cilium-operator-574c4bb98d-65vgz\" (UID: \"7abfa5c3-4657-48b9-b99d-c2c9863efbed\") " pod="kube-system/cilium-operator-574c4bb98d-65vgz" Feb 12 19:24:07.543912 kubelet[1853]: I0212 19:24:07.543886 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7abfa5c3-4657-48b9-b99d-c2c9863efbed-cilium-config-path\") pod \"cilium-operator-574c4bb98d-65vgz\" (UID: \"7abfa5c3-4657-48b9-b99d-c2c9863efbed\") " pod="kube-system/cilium-operator-574c4bb98d-65vgz" Feb 12 19:24:07.555467 kubelet[1853]: I0212 19:24:07.555439 1853 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:24:07.560178 systemd[1]: Created slice kubepods-burstable-pod005e3602_b4b7_480c_bd2e_b5987fda9c3f.slice. 
Feb 12 19:24:07.644244 kubelet[1853]: I0212 19:24:07.644215 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-host-proc-sys-net\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.644471 kubelet[1853]: I0212 19:24:07.644456 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-ipsec-secrets\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.644578 kubelet[1853]: I0212 19:24:07.644568 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-host-proc-sys-kernel\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.644678 kubelet[1853]: I0212 19:24:07.644665 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-cgroup\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.644794 kubelet[1853]: I0212 19:24:07.644783 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-lib-modules\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.644909 kubelet[1853]: I0212 19:24:07.644898 1853 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-config-path\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.645002 kubelet[1853]: I0212 19:24:07.644992 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/005e3602-b4b7-480c-bd2e-b5987fda9c3f-hubble-tls\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.645108 kubelet[1853]: I0212 19:24:07.645098 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-bpf-maps\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.645196 kubelet[1853]: I0212 19:24:07.645187 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-run\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.645291 kubelet[1853]: I0212 19:24:07.645282 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-hostproc\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.645387 kubelet[1853]: I0212 19:24:07.645376 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cni-path\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.645497 kubelet[1853]: I0212 19:24:07.645486 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-etc-cni-netd\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.645611 kubelet[1853]: I0212 19:24:07.645600 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/005e3602-b4b7-480c-bd2e-b5987fda9c3f-clustermesh-secrets\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.645703 kubelet[1853]: I0212 19:24:07.645694 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbrwh\" (UniqueName: \"kubernetes.io/projected/005e3602-b4b7-480c-bd2e-b5987fda9c3f-kube-api-access-tbrwh\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:07.645794 kubelet[1853]: I0212 19:24:07.645784 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-xtables-lock\") pod \"cilium-jfhzw\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " pod="kube-system/cilium-jfhzw" Feb 12 19:24:08.169509 kubelet[1853]: E0212 19:24:08.169473 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:08.645603 kubelet[1853]: E0212 19:24:08.645572 1853 configmap.go:199] Couldn't get configMap 
kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:24:08.645932 kubelet[1853]: E0212 19:24:08.645918 1853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7abfa5c3-4657-48b9-b99d-c2c9863efbed-cilium-config-path podName:7abfa5c3-4657-48b9-b99d-c2c9863efbed nodeName:}" failed. No retries permitted until 2024-02-12 19:24:09.145893878 +0000 UTC m=+59.977468772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/7abfa5c3-4657-48b9-b99d-c2c9863efbed-cilium-config-path") pod "cilium-operator-574c4bb98d-65vgz" (UID: "7abfa5c3-4657-48b9-b99d-c2c9863efbed") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:24:08.750571 kubelet[1853]: E0212 19:24:08.750526 1853 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:24:08.750690 kubelet[1853]: E0212 19:24:08.750599 1853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-config-path podName:005e3602-b4b7-480c-bd2e-b5987fda9c3f nodeName:}" failed. No retries permitted until 2024-02-12 19:24:09.250579775 +0000 UTC m=+60.082154669 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-config-path") pod "cilium-jfhzw" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:24:09.170590 kubelet[1853]: E0212 19:24:09.170552 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:09.336776 env[1376]: time="2024-02-12T19:24:09.336632029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-65vgz,Uid:7abfa5c3-4657-48b9-b99d-c2c9863efbed,Namespace:kube-system,Attempt:0,}" Feb 12 19:24:09.368896 env[1376]: time="2024-02-12T19:24:09.368821868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:09.369036 env[1376]: time="2024-02-12T19:24:09.368908262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:09.369036 env[1376]: time="2024-02-12T19:24:09.368936940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:09.369202 env[1376]: time="2024-02-12T19:24:09.369144605Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/672c40ed22420a0aa8c76c70dc3d4893ae19b123553e6ae21bb3d844d3527e8c pid=3378 runtime=io.containerd.runc.v2 Feb 12 19:24:09.375324 env[1376]: time="2024-02-12T19:24:09.375289442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jfhzw,Uid:005e3602-b4b7-480c-bd2e-b5987fda9c3f,Namespace:kube-system,Attempt:0,}" Feb 12 19:24:09.386243 systemd[1]: run-containerd-runc-k8s.io-672c40ed22420a0aa8c76c70dc3d4893ae19b123553e6ae21bb3d844d3527e8c-runc.rSok5y.mount: Deactivated successfully. Feb 12 19:24:09.390093 systemd[1]: Started cri-containerd-672c40ed22420a0aa8c76c70dc3d4893ae19b123553e6ae21bb3d844d3527e8c.scope. Feb 12 19:24:09.423441 env[1376]: time="2024-02-12T19:24:09.423331219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-65vgz,Uid:7abfa5c3-4657-48b9-b99d-c2c9863efbed,Namespace:kube-system,Attempt:0,} returns sandbox id \"672c40ed22420a0aa8c76c70dc3d4893ae19b123553e6ae21bb3d844d3527e8c\"" Feb 12 19:24:09.425819 env[1376]: time="2024-02-12T19:24:09.425778562Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:24:09.428765 env[1376]: time="2024-02-12T19:24:09.428662914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:09.428765 env[1376]: time="2024-02-12T19:24:09.428704511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:09.428765 env[1376]: time="2024-02-12T19:24:09.428715591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:09.429184 env[1376]: time="2024-02-12T19:24:09.429138240Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6 pid=3420 runtime=io.containerd.runc.v2 Feb 12 19:24:09.440267 systemd[1]: Started cri-containerd-f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6.scope. Feb 12 19:24:09.464988 env[1376]: time="2024-02-12T19:24:09.464941699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jfhzw,Uid:005e3602-b4b7-480c-bd2e-b5987fda9c3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\"" Feb 12 19:24:09.467686 env[1376]: time="2024-02-12T19:24:09.467649744Z" level=info msg="CreateContainer within sandbox \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:24:09.502813 env[1376]: time="2024-02-12T19:24:09.502758573Z" level=info msg="CreateContainer within sandbox \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1\"" Feb 12 19:24:09.503490 env[1376]: time="2024-02-12T19:24:09.503438604Z" level=info msg="StartContainer for \"972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1\"" Feb 12 19:24:09.516902 systemd[1]: Started cri-containerd-972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1.scope. Feb 12 19:24:09.527135 systemd[1]: cri-containerd-972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1.scope: Deactivated successfully. Feb 12 19:24:09.527445 systemd[1]: Stopped cri-containerd-972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1.scope. 
Feb 12 19:24:09.576143 env[1376]: time="2024-02-12T19:24:09.576088766Z" level=info msg="shim disconnected" id=972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1 Feb 12 19:24:09.576143 env[1376]: time="2024-02-12T19:24:09.576139643Z" level=warning msg="cleaning up after shim disconnected" id=972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1 namespace=k8s.io Feb 12 19:24:09.576143 env[1376]: time="2024-02-12T19:24:09.576149962Z" level=info msg="cleaning up dead shim" Feb 12 19:24:09.583232 env[1376]: time="2024-02-12T19:24:09.583174895Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3477 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:24:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 19:24:09.583529 env[1376]: time="2024-02-12T19:24:09.583426637Z" level=error msg="copy shim log" error="read /proc/self/fd/65: file already closed" Feb 12 19:24:09.584822 env[1376]: time="2024-02-12T19:24:09.584787019Z" level=error msg="Failed to pipe stderr of container \"972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1\"" error="reading from a closed fifo" Feb 12 19:24:09.584968 env[1376]: time="2024-02-12T19:24:09.584939368Z" level=error msg="Failed to pipe stdout of container \"972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1\"" error="reading from a closed fifo" Feb 12 19:24:09.591162 env[1376]: time="2024-02-12T19:24:09.591104004Z" level=error msg="StartContainer for \"972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 19:24:09.591748 kubelet[1853]: E0212 19:24:09.591370 1853 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1" Feb 12 19:24:09.591748 kubelet[1853]: E0212 19:24:09.591479 1853 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 19:24:09.591748 kubelet[1853]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 19:24:09.591748 kubelet[1853]: rm /hostbin/cilium-mount Feb 12 19:24:09.591988 kubelet[1853]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tbrwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-jfhzw_kube-system(005e3602-b4b7-480c-bd2e-b5987fda9c3f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 19:24:09.592059 kubelet[1853]: E0212 19:24:09.591515 1853 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jfhzw" podUID=005e3602-b4b7-480c-bd2e-b5987fda9c3f Feb 12 19:24:10.128233 kubelet[1853]: E0212 19:24:10.128190 1853 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:10.134543 env[1376]: time="2024-02-12T19:24:10.134504733Z" level=info msg="StopPodSandbox for \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\"" Feb 12 19:24:10.134653 env[1376]: time="2024-02-12T19:24:10.134588727Z" level=info msg="TearDown network for sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" successfully" Feb 12 19:24:10.134653 env[1376]: time="2024-02-12T19:24:10.134621405Z" level=info msg="StopPodSandbox for 
\"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" returns successfully" Feb 12 19:24:10.135093 env[1376]: time="2024-02-12T19:24:10.135067853Z" level=info msg="RemovePodSandbox for \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\"" Feb 12 19:24:10.135211 env[1376]: time="2024-02-12T19:24:10.135178965Z" level=info msg="Forcibly stopping sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\"" Feb 12 19:24:10.135312 env[1376]: time="2024-02-12T19:24:10.135292957Z" level=info msg="TearDown network for sandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" successfully" Feb 12 19:24:10.170403 env[1376]: time="2024-02-12T19:24:10.170357800Z" level=info msg="RemovePodSandbox \"a33fdafa69d0b4c105766513530fb63893eda73effc115e60a9d9c0901c22212\" returns successfully" Feb 12 19:24:10.170813 kubelet[1853]: E0212 19:24:10.170789 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:10.213279 kubelet[1853]: E0212 19:24:10.213250 1853 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:24:10.410914 env[1376]: time="2024-02-12T19:24:10.410811214Z" level=info msg="StopPodSandbox for \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\"" Feb 12 19:24:10.410914 env[1376]: time="2024-02-12T19:24:10.410864650Z" level=info msg="Container to stop \"972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:10.412879 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6-shm.mount: Deactivated successfully. 
Feb 12 19:24:10.419160 systemd[1]: cri-containerd-f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6.scope: Deactivated successfully. Feb 12 19:24:10.438526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6-rootfs.mount: Deactivated successfully. Feb 12 19:24:10.454691 env[1376]: time="2024-02-12T19:24:10.454639918Z" level=info msg="shim disconnected" id=f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6 Feb 12 19:24:10.454691 env[1376]: time="2024-02-12T19:24:10.454691234Z" level=warning msg="cleaning up after shim disconnected" id=f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6 namespace=k8s.io Feb 12 19:24:10.454910 env[1376]: time="2024-02-12T19:24:10.454700313Z" level=info msg="cleaning up dead shim" Feb 12 19:24:10.462159 env[1376]: time="2024-02-12T19:24:10.462107790Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3510 runtime=io.containerd.runc.v2\n" Feb 12 19:24:10.462426 env[1376]: time="2024-02-12T19:24:10.462393130Z" level=info msg="TearDown network for sandbox \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\" successfully" Feb 12 19:24:10.462466 env[1376]: time="2024-02-12T19:24:10.462423448Z" level=info msg="StopPodSandbox for \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\" returns successfully" Feb 12 19:24:10.565779 kubelet[1853]: I0212 19:24:10.565581 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-run\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.565779 kubelet[1853]: I0212 19:24:10.565636 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbrwh\" (UniqueName: 
\"kubernetes.io/projected/005e3602-b4b7-480c-bd2e-b5987fda9c3f-kube-api-access-tbrwh\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.565779 kubelet[1853]: I0212 19:24:10.565658 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-cgroup\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.565779 kubelet[1853]: I0212 19:24:10.565721 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.565779 kubelet[1853]: I0212 19:24:10.565774 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-config-path\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566050 kubelet[1853]: I0212 19:24:10.565797 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-ipsec-secrets\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566050 kubelet[1853]: I0212 19:24:10.565815 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-host-proc-sys-kernel\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: 
\"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566050 kubelet[1853]: I0212 19:24:10.565924 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/005e3602-b4b7-480c-bd2e-b5987fda9c3f-clustermesh-secrets\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566050 kubelet[1853]: I0212 19:24:10.565945 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cni-path\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566050 kubelet[1853]: I0212 19:24:10.565962 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-etc-cni-netd\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566050 kubelet[1853]: I0212 19:24:10.565982 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-host-proc-sys-net\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566185 kubelet[1853]: I0212 19:24:10.566021 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-bpf-maps\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566185 kubelet[1853]: I0212 19:24:10.566038 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-hostproc\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566185 kubelet[1853]: I0212 19:24:10.566055 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-lib-modules\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566185 kubelet[1853]: I0212 19:24:10.566082 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/005e3602-b4b7-480c-bd2e-b5987fda9c3f-hubble-tls\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566185 kubelet[1853]: I0212 19:24:10.566102 1853 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-xtables-lock\") pod \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\" (UID: \"005e3602-b4b7-480c-bd2e-b5987fda9c3f\") " Feb 12 19:24:10.566185 kubelet[1853]: I0212 19:24:10.566130 1853 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-run\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.566318 kubelet[1853]: I0212 19:24:10.566158 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.567768 kubelet[1853]: I0212 19:24:10.566508 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.567768 kubelet[1853]: W0212 19:24:10.566602 1853 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/005e3602-b4b7-480c-bd2e-b5987fda9c3f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:24:10.567768 kubelet[1853]: I0212 19:24:10.566758 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.568344 kubelet[1853]: I0212 19:24:10.567985 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.568344 kubelet[1853]: I0212 19:24:10.568047 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cni-path" (OuterVolumeSpecName: "cni-path") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.568344 kubelet[1853]: I0212 19:24:10.568065 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.568344 kubelet[1853]: I0212 19:24:10.568083 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.568344 kubelet[1853]: I0212 19:24:10.568100 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.568496 kubelet[1853]: I0212 19:24:10.568116 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-hostproc" (OuterVolumeSpecName: "hostproc") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.569294 kubelet[1853]: I0212 19:24:10.569271 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:24:10.571924 systemd[1]: var-lib-kubelet-pods-005e3602\x2db4b7\x2d480c\x2dbd2e\x2db5987fda9c3f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:24:10.573319 kubelet[1853]: I0212 19:24:10.573290 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:24:10.574789 systemd[1]: var-lib-kubelet-pods-005e3602\x2db4b7\x2d480c\x2dbd2e\x2db5987fda9c3f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:24:10.575916 kubelet[1853]: I0212 19:24:10.575880 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005e3602-b4b7-480c-bd2e-b5987fda9c3f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:10.576617 kubelet[1853]: I0212 19:24:10.576579 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/005e3602-b4b7-480c-bd2e-b5987fda9c3f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:24:10.577921 kubelet[1853]: I0212 19:24:10.577886 1853 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005e3602-b4b7-480c-bd2e-b5987fda9c3f-kube-api-access-tbrwh" (OuterVolumeSpecName: "kube-api-access-tbrwh") pod "005e3602-b4b7-480c-bd2e-b5987fda9c3f" (UID: "005e3602-b4b7-480c-bd2e-b5987fda9c3f"). InnerVolumeSpecName "kube-api-access-tbrwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:10.667930 kubelet[1853]: I0212 19:24:10.666262 1853 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-lib-modules\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.667930 kubelet[1853]: I0212 19:24:10.666296 1853 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/005e3602-b4b7-480c-bd2e-b5987fda9c3f-hubble-tls\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.667930 kubelet[1853]: I0212 19:24:10.666307 1853 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-xtables-lock\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.667930 kubelet[1853]: I0212 19:24:10.666318 1853 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tbrwh\" (UniqueName: 
\"kubernetes.io/projected/005e3602-b4b7-480c-bd2e-b5987fda9c3f-kube-api-access-tbrwh\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.667930 kubelet[1853]: I0212 19:24:10.666328 1853 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-ipsec-secrets\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.667930 kubelet[1853]: I0212 19:24:10.666338 1853 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-host-proc-sys-kernel\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.667930 kubelet[1853]: I0212 19:24:10.666348 1853 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-cgroup\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.667930 kubelet[1853]: I0212 19:24:10.666357 1853 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cilium-config-path\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.668198 kubelet[1853]: I0212 19:24:10.666367 1853 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/005e3602-b4b7-480c-bd2e-b5987fda9c3f-clustermesh-secrets\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.668198 kubelet[1853]: I0212 19:24:10.666375 1853 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-host-proc-sys-net\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.668198 kubelet[1853]: I0212 19:24:10.666384 1853 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-bpf-maps\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.668198 kubelet[1853]: I0212 19:24:10.666393 1853 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-hostproc\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.668198 kubelet[1853]: I0212 19:24:10.666401 1853 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-cni-path\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:10.668198 kubelet[1853]: I0212 19:24:10.666410 1853 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/005e3602-b4b7-480c-bd2e-b5987fda9c3f-etc-cni-netd\") on node \"10.200.20.25\" DevicePath \"\"" Feb 12 19:24:11.171854 kubelet[1853]: E0212 19:24:11.171806 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:11.339022 env[1376]: time="2024-02-12T19:24:11.335429445Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:11.344861 env[1376]: time="2024-02-12T19:24:11.344806076Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:11.348073 env[1376]: time="2024-02-12T19:24:11.348040812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 
19:24:11.348418 env[1376]: time="2024-02-12T19:24:11.348387188Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 12 19:24:11.351163 env[1376]: time="2024-02-12T19:24:11.351120719Z" level=info msg="CreateContainer within sandbox \"672c40ed22420a0aa8c76c70dc3d4893ae19b123553e6ae21bb3d844d3527e8c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:24:11.361552 systemd[1]: var-lib-kubelet-pods-005e3602\x2db4b7\x2d480c\x2dbd2e\x2db5987fda9c3f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtbrwh.mount: Deactivated successfully. Feb 12 19:24:11.361643 systemd[1]: var-lib-kubelet-pods-005e3602\x2db4b7\x2d480c\x2dbd2e\x2db5987fda9c3f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:24:11.380669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628344226.mount: Deactivated successfully. Feb 12 19:24:11.391187 env[1376]: time="2024-02-12T19:24:11.391140508Z" level=info msg="CreateContainer within sandbox \"672c40ed22420a0aa8c76c70dc3d4893ae19b123553e6ae21bb3d844d3527e8c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e603f2c4552a38c6f0f3c62331d4a4d6b474e2968484db39132e8d1cfd257898\"" Feb 12 19:24:11.391858 env[1376]: time="2024-02-12T19:24:11.391826020Z" level=info msg="StartContainer for \"e603f2c4552a38c6f0f3c62331d4a4d6b474e2968484db39132e8d1cfd257898\"" Feb 12 19:24:11.404764 systemd[1]: Started cri-containerd-e603f2c4552a38c6f0f3c62331d4a4d6b474e2968484db39132e8d1cfd257898.scope. 
Feb 12 19:24:11.414629 kubelet[1853]: I0212 19:24:11.414590 1853 scope.go:115] "RemoveContainer" containerID="972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1" Feb 12 19:24:11.418654 systemd[1]: Removed slice kubepods-burstable-pod005e3602_b4b7_480c_bd2e_b5987fda9c3f.slice. Feb 12 19:24:11.419497 env[1376]: time="2024-02-12T19:24:11.419394152Z" level=info msg="RemoveContainer for \"972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1\"" Feb 12 19:24:11.429195 env[1376]: time="2024-02-12T19:24:11.428445405Z" level=info msg="RemoveContainer for \"972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1\" returns successfully" Feb 12 19:24:11.445402 env[1376]: time="2024-02-12T19:24:11.445356474Z" level=info msg="StartContainer for \"e603f2c4552a38c6f0f3c62331d4a4d6b474e2968484db39132e8d1cfd257898\" returns successfully" Feb 12 19:24:11.467841 kubelet[1853]: I0212 19:24:11.467791 1853 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:24:11.467985 kubelet[1853]: E0212 19:24:11.467884 1853 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="005e3602-b4b7-480c-bd2e-b5987fda9c3f" containerName="mount-cgroup" Feb 12 19:24:11.467985 kubelet[1853]: I0212 19:24:11.467909 1853 memory_manager.go:346] "RemoveStaleState removing state" podUID="005e3602-b4b7-480c-bd2e-b5987fda9c3f" containerName="mount-cgroup" Feb 12 19:24:11.472810 systemd[1]: Created slice kubepods-burstable-pod9e78a3c0_5f5d_43c8_83b6_0f4346db27b8.slice. 
Feb 12 19:24:11.570817 kubelet[1853]: I0212 19:24:11.570767 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-cilium-run\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.570969 kubelet[1853]: I0212 19:24:11.570827 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-cilium-config-path\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.570969 kubelet[1853]: I0212 19:24:11.570854 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-cilium-cgroup\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.570969 kubelet[1853]: I0212 19:24:11.570875 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-cni-path\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.570969 kubelet[1853]: I0212 19:24:11.570907 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-etc-cni-netd\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.570969 kubelet[1853]: I0212 19:24:11.570931 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-lib-modules\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.570969 kubelet[1853]: I0212 19:24:11.570948 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-hostproc\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.571117 kubelet[1853]: I0212 19:24:11.570968 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-hubble-tls\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.571117 kubelet[1853]: I0212 19:24:11.571000 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w58v\" (UniqueName: \"kubernetes.io/projected/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-kube-api-access-2w58v\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.571117 kubelet[1853]: I0212 19:24:11.571024 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-bpf-maps\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.571117 kubelet[1853]: I0212 19:24:11.571044 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-xtables-lock\") pod \"cilium-mmnh4\" 
(UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.571117 kubelet[1853]: I0212 19:24:11.571073 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-clustermesh-secrets\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.571117 kubelet[1853]: I0212 19:24:11.571094 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-cilium-ipsec-secrets\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.571301 kubelet[1853]: I0212 19:24:11.571112 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-host-proc-sys-net\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.571301 kubelet[1853]: I0212 19:24:11.571142 1853 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e78a3c0-5f5d-43c8-83b6-0f4346db27b8-host-proc-sys-kernel\") pod \"cilium-mmnh4\" (UID: \"9e78a3c0-5f5d-43c8-83b6-0f4346db27b8\") " pod="kube-system/cilium-mmnh4" Feb 12 19:24:11.779384 env[1376]: time="2024-02-12T19:24:11.779332549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mmnh4,Uid:9e78a3c0-5f5d-43c8-83b6-0f4346db27b8,Namespace:kube-system,Attempt:0,}" Feb 12 19:24:11.807016 env[1376]: time="2024-02-12T19:24:11.806838965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:11.807016 env[1376]: time="2024-02-12T19:24:11.806875162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:11.807016 env[1376]: time="2024-02-12T19:24:11.806885362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:11.807300 env[1376]: time="2024-02-12T19:24:11.807248376Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82 pid=3576 runtime=io.containerd.runc.v2 Feb 12 19:24:11.816998 systemd[1]: Started cri-containerd-841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82.scope. Feb 12 19:24:11.837821 env[1376]: time="2024-02-12T19:24:11.837782062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mmnh4,Uid:9e78a3c0-5f5d-43c8-83b6-0f4346db27b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82\"" Feb 12 19:24:11.840687 env[1376]: time="2024-02-12T19:24:11.840644544Z" level=info msg="CreateContainer within sandbox \"841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:24:11.875863 env[1376]: time="2024-02-12T19:24:11.875801430Z" level=info msg="CreateContainer within sandbox \"841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1afc96e930ef94782920840c474f10671740de2d305dd35c7fc8d4d9b84a52b5\"" Feb 12 19:24:11.876728 env[1376]: time="2024-02-12T19:24:11.876690568Z" level=info msg="StartContainer for \"1afc96e930ef94782920840c474f10671740de2d305dd35c7fc8d4d9b84a52b5\"" Feb 12 19:24:11.890051 systemd[1]: Started 
cri-containerd-1afc96e930ef94782920840c474f10671740de2d305dd35c7fc8d4d9b84a52b5.scope. Feb 12 19:24:11.922792 systemd[1]: cri-containerd-1afc96e930ef94782920840c474f10671740de2d305dd35c7fc8d4d9b84a52b5.scope: Deactivated successfully. Feb 12 19:24:11.924029 env[1376]: time="2024-02-12T19:24:11.922934326Z" level=info msg="StartContainer for \"1afc96e930ef94782920840c474f10671740de2d305dd35c7fc8d4d9b84a52b5\" returns successfully" Feb 12 19:24:12.172846 kubelet[1853]: E0212 19:24:12.172171 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:12.226018 env[1376]: time="2024-02-12T19:24:12.225970729Z" level=info msg="shim disconnected" id=1afc96e930ef94782920840c474f10671740de2d305dd35c7fc8d4d9b84a52b5 Feb 12 19:24:12.226018 env[1376]: time="2024-02-12T19:24:12.226014726Z" level=warning msg="cleaning up after shim disconnected" id=1afc96e930ef94782920840c474f10671740de2d305dd35c7fc8d4d9b84a52b5 namespace=k8s.io Feb 12 19:24:12.226018 env[1376]: time="2024-02-12T19:24:12.226023685Z" level=info msg="cleaning up dead shim" Feb 12 19:24:12.232474 env[1376]: time="2024-02-12T19:24:12.232426130Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3660 runtime=io.containerd.runc.v2\n" Feb 12 19:24:12.281692 kubelet[1853]: I0212 19:24:12.281433 1853 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=005e3602-b4b7-480c-bd2e-b5987fda9c3f path="/var/lib/kubelet/pods/005e3602-b4b7-480c-bd2e-b5987fda9c3f/volumes" Feb 12 19:24:12.431153 env[1376]: time="2024-02-12T19:24:12.430697951Z" level=info msg="CreateContainer within sandbox \"841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:24:12.440004 kubelet[1853]: I0212 19:24:12.439979 1853 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/cilium-operator-574c4bb98d-65vgz" podStartSLOduration=3.516346129 podCreationTimestamp="2024-02-12 19:24:07 +0000 UTC" firstStartedPulling="2024-02-12 19:24:09.425021337 +0000 UTC m=+60.256596191" lastFinishedPulling="2024-02-12 19:24:11.348618812 +0000 UTC m=+62.180193706" observedRunningTime="2024-02-12 19:24:12.439437038 +0000 UTC m=+63.271011932" watchObservedRunningTime="2024-02-12 19:24:12.439943644 +0000 UTC m=+63.271518538" Feb 12 19:24:12.453875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984104911.mount: Deactivated successfully. Feb 12 19:24:12.457266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1371565023.mount: Deactivated successfully. Feb 12 19:24:12.467859 env[1376]: time="2024-02-12T19:24:12.467816632Z" level=info msg="CreateContainer within sandbox \"841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3fd9c6d0b924fe83f21b67ab1841bb52eb1628e27c0eff17796c7320ad79260d\"" Feb 12 19:24:12.468688 env[1376]: time="2024-02-12T19:24:12.468646495Z" level=info msg="StartContainer for \"3fd9c6d0b924fe83f21b67ab1841bb52eb1628e27c0eff17796c7320ad79260d\"" Feb 12 19:24:12.484373 systemd[1]: Started cri-containerd-3fd9c6d0b924fe83f21b67ab1841bb52eb1628e27c0eff17796c7320ad79260d.scope. Feb 12 19:24:12.510286 systemd[1]: cri-containerd-3fd9c6d0b924fe83f21b67ab1841bb52eb1628e27c0eff17796c7320ad79260d.scope: Deactivated successfully. 
Feb 12 19:24:12.515215 env[1376]: time="2024-02-12T19:24:12.515167337Z" level=info msg="StartContainer for \"3fd9c6d0b924fe83f21b67ab1841bb52eb1628e27c0eff17796c7320ad79260d\" returns successfully" Feb 12 19:24:12.541369 env[1376]: time="2024-02-12T19:24:12.541325842Z" level=info msg="shim disconnected" id=3fd9c6d0b924fe83f21b67ab1841bb52eb1628e27c0eff17796c7320ad79260d Feb 12 19:24:12.541661 env[1376]: time="2024-02-12T19:24:12.541643180Z" level=warning msg="cleaning up after shim disconnected" id=3fd9c6d0b924fe83f21b67ab1841bb52eb1628e27c0eff17796c7320ad79260d namespace=k8s.io Feb 12 19:24:12.541754 env[1376]: time="2024-02-12T19:24:12.541722655Z" level=info msg="cleaning up dead shim" Feb 12 19:24:12.548627 env[1376]: time="2024-02-12T19:24:12.548588109Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3720 runtime=io.containerd.runc.v2\n" Feb 12 19:24:12.680778 kubelet[1853]: W0212 19:24:12.680598 1853 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod005e3602_b4b7_480c_bd2e_b5987fda9c3f.slice/cri-containerd-972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1.scope WatchSource:0}: container "972f1bb303d7946f643e629974c7a4e4232bdedc0383d7e4e76ab211e8bd55e1" in namespace "k8s.io": not found Feb 12 19:24:12.779602 kubelet[1853]: I0212 19:24:12.779568 1853 setters.go:548] "Node became not ready" node="10.200.20.25" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:24:12.779511873 +0000 UTC m=+63.611086767 LastTransitionTime:2024-02-12 19:24:12.779511873 +0000 UTC m=+63.611086767 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 19:24:13.173022 kubelet[1853]: E0212 19:24:13.172921 1853 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:13.434338 env[1376]: time="2024-02-12T19:24:13.434299513Z" level=info msg="CreateContainer within sandbox \"841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:24:13.460053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount958768999.mount: Deactivated successfully. Feb 12 19:24:13.464483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3383264216.mount: Deactivated successfully. Feb 12 19:24:13.477247 env[1376]: time="2024-02-12T19:24:13.477189258Z" level=info msg="CreateContainer within sandbox \"841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6e0043e41079175c0b5e1c9a7319900acee3803d1403daba9831c6ef1292a910\"" Feb 12 19:24:13.478060 env[1376]: time="2024-02-12T19:24:13.478035721Z" level=info msg="StartContainer for \"6e0043e41079175c0b5e1c9a7319900acee3803d1403daba9831c6ef1292a910\"" Feb 12 19:24:13.496314 systemd[1]: Started cri-containerd-6e0043e41079175c0b5e1c9a7319900acee3803d1403daba9831c6ef1292a910.scope. Feb 12 19:24:13.522889 systemd[1]: cri-containerd-6e0043e41079175c0b5e1c9a7319900acee3803d1403daba9831c6ef1292a910.scope: Deactivated successfully. 
Feb 12 19:24:13.525094 env[1376]: time="2024-02-12T19:24:13.524888363Z" level=info msg="StartContainer for \"6e0043e41079175c0b5e1c9a7319900acee3803d1403daba9831c6ef1292a910\" returns successfully" Feb 12 19:24:13.556893 env[1376]: time="2024-02-12T19:24:13.556847875Z" level=info msg="shim disconnected" id=6e0043e41079175c0b5e1c9a7319900acee3803d1403daba9831c6ef1292a910 Feb 12 19:24:13.557164 env[1376]: time="2024-02-12T19:24:13.557146095Z" level=warning msg="cleaning up after shim disconnected" id=6e0043e41079175c0b5e1c9a7319900acee3803d1403daba9831c6ef1292a910 namespace=k8s.io Feb 12 19:24:13.557268 env[1376]: time="2024-02-12T19:24:13.557253368Z" level=info msg="cleaning up dead shim" Feb 12 19:24:13.564177 env[1376]: time="2024-02-12T19:24:13.564136310Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3778 runtime=io.containerd.runc.v2\n" Feb 12 19:24:14.174061 kubelet[1853]: E0212 19:24:14.174012 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:14.437359 env[1376]: time="2024-02-12T19:24:14.437325141Z" level=info msg="CreateContainer within sandbox \"841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:24:14.466553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203613512.mount: Deactivated successfully. Feb 12 19:24:14.471213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906489891.mount: Deactivated successfully. 
Feb 12 19:24:14.486169 env[1376]: time="2024-02-12T19:24:14.486126075Z" level=info msg="CreateContainer within sandbox \"841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fdc9a2858c6dcaeded4c7847e82dacae88c47f2f23ceba258e4258729af1f9fa\"" Feb 12 19:24:14.488640 env[1376]: time="2024-02-12T19:24:14.488610032Z" level=info msg="StartContainer for \"fdc9a2858c6dcaeded4c7847e82dacae88c47f2f23ceba258e4258729af1f9fa\"" Feb 12 19:24:14.502225 systemd[1]: Started cri-containerd-fdc9a2858c6dcaeded4c7847e82dacae88c47f2f23ceba258e4258729af1f9fa.scope. Feb 12 19:24:14.525082 systemd[1]: cri-containerd-fdc9a2858c6dcaeded4c7847e82dacae88c47f2f23ceba258e4258729af1f9fa.scope: Deactivated successfully. Feb 12 19:24:14.526931 env[1376]: time="2024-02-12T19:24:14.526728184Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e78a3c0_5f5d_43c8_83b6_0f4346db27b8.slice/cri-containerd-fdc9a2858c6dcaeded4c7847e82dacae88c47f2f23ceba258e4258729af1f9fa.scope/memory.events\": no such file or directory" Feb 12 19:24:14.531931 env[1376]: time="2024-02-12T19:24:14.531892967Z" level=info msg="StartContainer for \"fdc9a2858c6dcaeded4c7847e82dacae88c47f2f23ceba258e4258729af1f9fa\" returns successfully" Feb 12 19:24:14.555956 env[1376]: time="2024-02-12T19:24:14.555902879Z" level=info msg="shim disconnected" id=fdc9a2858c6dcaeded4c7847e82dacae88c47f2f23ceba258e4258729af1f9fa Feb 12 19:24:14.555956 env[1376]: time="2024-02-12T19:24:14.555956955Z" level=warning msg="cleaning up after shim disconnected" id=fdc9a2858c6dcaeded4c7847e82dacae88c47f2f23ceba258e4258729af1f9fa namespace=k8s.io Feb 12 19:24:14.556169 env[1376]: time="2024-02-12T19:24:14.555967635Z" level=info msg="cleaning up dead shim" Feb 12 19:24:14.562832 env[1376]: time="2024-02-12T19:24:14.562787269Z" level=warning 
msg="cleanup warnings time=\"2024-02-12T19:24:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3832 runtime=io.containerd.runc.v2\n" Feb 12 19:24:15.174723 kubelet[1853]: E0212 19:24:15.174676 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:15.214728 kubelet[1853]: E0212 19:24:15.214705 1853 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:24:15.440896 env[1376]: time="2024-02-12T19:24:15.440861362Z" level=info msg="CreateContainer within sandbox \"841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:24:15.465675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215465755.mount: Deactivated successfully. Feb 12 19:24:15.469234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3278971709.mount: Deactivated successfully. Feb 12 19:24:15.481772 env[1376]: time="2024-02-12T19:24:15.481704506Z" level=info msg="CreateContainer within sandbox \"841766bc52255859ee2c5787d67992fb6a62a9727d62e41ed4fc7a15ba6a9a82\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b23bae44938a5e2da3f916a2f8e8239f80a4bf74f13534fd4deed2ad0f904b89\"" Feb 12 19:24:15.482400 env[1376]: time="2024-02-12T19:24:15.482371343Z" level=info msg="StartContainer for \"b23bae44938a5e2da3f916a2f8e8239f80a4bf74f13534fd4deed2ad0f904b89\"" Feb 12 19:24:15.498273 systemd[1]: Started cri-containerd-b23bae44938a5e2da3f916a2f8e8239f80a4bf74f13534fd4deed2ad0f904b89.scope. 
Feb 12 19:24:15.528475 env[1376]: time="2024-02-12T19:24:15.528407434Z" level=info msg="StartContainer for \"b23bae44938a5e2da3f916a2f8e8239f80a4bf74f13534fd4deed2ad0f904b89\" returns successfully" Feb 12 19:24:15.802505 kubelet[1853]: W0212 19:24:15.802393 1853 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e78a3c0_5f5d_43c8_83b6_0f4346db27b8.slice/cri-containerd-1afc96e930ef94782920840c474f10671740de2d305dd35c7fc8d4d9b84a52b5.scope WatchSource:0}: task 1afc96e930ef94782920840c474f10671740de2d305dd35c7fc8d4d9b84a52b5 not found: not found Feb 12 19:24:15.834762 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 12 19:24:16.175785 kubelet[1853]: E0212 19:24:16.175678 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:16.458242 kubelet[1853]: I0212 19:24:16.458212 1853 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mmnh4" podStartSLOduration=5.458179016 podCreationTimestamp="2024-02-12 19:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:16.456776024 +0000 UTC m=+67.288350918" watchObservedRunningTime="2024-02-12 19:24:16.458179016 +0000 UTC m=+67.289753870" Feb 12 19:24:17.175845 kubelet[1853]: E0212 19:24:17.175798 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:17.524023 systemd[1]: run-containerd-runc-k8s.io-b23bae44938a5e2da3f916a2f8e8239f80a4bf74f13534fd4deed2ad0f904b89-runc.yRKU9y.mount: Deactivated successfully. 
Feb 12 19:24:18.176095 kubelet[1853]: E0212 19:24:18.176061 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:18.338897 systemd-networkd[1526]: lxc_health: Link UP Feb 12 19:24:18.351211 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:24:18.349772 systemd-networkd[1526]: lxc_health: Gained carrier Feb 12 19:24:18.914429 kubelet[1853]: W0212 19:24:18.914383 1853 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e78a3c0_5f5d_43c8_83b6_0f4346db27b8.slice/cri-containerd-3fd9c6d0b924fe83f21b67ab1841bb52eb1628e27c0eff17796c7320ad79260d.scope WatchSource:0}: task 3fd9c6d0b924fe83f21b67ab1841bb52eb1628e27c0eff17796c7320ad79260d not found: not found Feb 12 19:24:19.177417 kubelet[1853]: E0212 19:24:19.177315 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:19.670394 systemd[1]: run-containerd-runc-k8s.io-b23bae44938a5e2da3f916a2f8e8239f80a4bf74f13534fd4deed2ad0f904b89-runc.w23QfC.mount: Deactivated successfully. 
Feb 12 19:24:20.160865 systemd-networkd[1526]: lxc_health: Gained IPv6LL Feb 12 19:24:20.177847 kubelet[1853]: E0212 19:24:20.177811 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:21.178836 kubelet[1853]: E0212 19:24:21.178802 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:22.022085 kubelet[1853]: W0212 19:24:22.021949 1853 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e78a3c0_5f5d_43c8_83b6_0f4346db27b8.slice/cri-containerd-6e0043e41079175c0b5e1c9a7319900acee3803d1403daba9831c6ef1292a910.scope WatchSource:0}: task 6e0043e41079175c0b5e1c9a7319900acee3803d1403daba9831c6ef1292a910 not found: not found Feb 12 19:24:22.179791 kubelet[1853]: E0212 19:24:22.179711 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:23.180864 kubelet[1853]: E0212 19:24:23.180828 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:24.047231 systemd[1]: run-containerd-runc-k8s.io-b23bae44938a5e2da3f916a2f8e8239f80a4bf74f13534fd4deed2ad0f904b89-runc.4CPBIc.mount: Deactivated successfully. 
Feb 12 19:24:24.181692 kubelet[1853]: E0212 19:24:24.181634 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:25.128753 kubelet[1853]: W0212 19:24:25.128703 1853 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e78a3c0_5f5d_43c8_83b6_0f4346db27b8.slice/cri-containerd-fdc9a2858c6dcaeded4c7847e82dacae88c47f2f23ceba258e4258729af1f9fa.scope WatchSource:0}: task fdc9a2858c6dcaeded4c7847e82dacae88c47f2f23ceba258e4258729af1f9fa not found: not found Feb 12 19:24:25.182665 kubelet[1853]: E0212 19:24:25.182628 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:26.182992 kubelet[1853]: E0212 19:24:26.182956 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:27.183211 kubelet[1853]: E0212 19:24:27.183168 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:28.183573 kubelet[1853]: E0212 19:24:28.183534 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:29.183829 kubelet[1853]: E0212 19:24:29.183800 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:30.128383 kubelet[1853]: E0212 19:24:30.128340 1853 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:30.184800 kubelet[1853]: E0212 19:24:30.184765 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:31.185687 kubelet[1853]: E0212 19:24:31.185649 1853 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:32.185944 kubelet[1853]: E0212 19:24:32.185910 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:33.186536 kubelet[1853]: E0212 19:24:33.186507 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:34.187075 kubelet[1853]: E0212 19:24:34.187023 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:35.187394 kubelet[1853]: E0212 19:24:35.187362 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:36.188179 kubelet[1853]: E0212 19:24:36.188147 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:37.189537 kubelet[1853]: E0212 19:24:37.189497 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:38.190676 kubelet[1853]: E0212 19:24:38.190633 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:38.847982 kubelet[1853]: E0212 19:24:38.847939 1853 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.4:54262->10.200.20.26:2379: read: connection timed out" Feb 12 19:24:39.191562 kubelet[1853]: E0212 19:24:39.191535 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:40.192367 kubelet[1853]: E0212 19:24:40.192340 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:41.194110 kubelet[1853]: E0212 19:24:41.194074 
1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:42.194975 kubelet[1853]: E0212 19:24:42.194939 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:43.195758 kubelet[1853]: E0212 19:24:43.195721 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:44.197200 kubelet[1853]: E0212 19:24:44.197166 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:45.197643 kubelet[1853]: E0212 19:24:45.197603 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:46.199079 kubelet[1853]: E0212 19:24:46.199038 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:47.199576 kubelet[1853]: E0212 19:24:47.199544 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:48.200820 kubelet[1853]: E0212 19:24:48.200787 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:48.848857 kubelet[1853]: E0212 19:24:48.848830 1853 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.25?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 19:24:49.201745 kubelet[1853]: E0212 19:24:49.201600 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:50.127859 kubelet[1853]: E0212 19:24:50.127825 1853 file.go:104] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:50.202274 kubelet[1853]: E0212 19:24:50.202240 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:51.203203 kubelet[1853]: E0212 19:24:51.203173 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:52.203836 kubelet[1853]: E0212 19:24:52.203797 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:53.158459 kubelet[1853]: E0212 19:24:53.158425 1853 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T19:24:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T19:24:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T19:24:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T19:24:43Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":157636062},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":55608803},{\\\"names\\\":[\\\"registry.k8s.io/kube-pro
xy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88\\\",\\\"registry.k8s.io/kube-proxy:v1.27.10\\\"],\\\"sizeBytes\\\":23037360},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":17128551},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":253553}]}}\" for node \"10.200.20.25\": Patch \"https://10.200.20.4:6443/api/v1/nodes/10.200.20.25/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 19:24:53.204925 kubelet[1853]: E0212 19:24:53.204893 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:54.205329 kubelet[1853]: E0212 19:24:54.205293 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:55.206215 kubelet[1853]: E0212 19:24:55.206183 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:56.207496 kubelet[1853]: E0212 19:24:56.207456 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:57.207723 kubelet[1853]: E0212 19:24:57.207687 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:58.208769 kubelet[1853]: E0212 19:24:58.208720 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:58.849584 kubelet[1853]: E0212 19:24:58.849553 1853 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.25?timeout=10s\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 19:24:59.208963 kubelet[1853]: E0212 19:24:59.208926 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:00.210371 kubelet[1853]: E0212 19:25:00.210342 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:01.211366 kubelet[1853]: E0212 19:25:01.211333 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:02.211979 kubelet[1853]: E0212 19:25:02.211936 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:03.159372 kubelet[1853]: E0212 19:25:03.159339 1853 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.25\": Get \"https://10.200.20.4:6443/api/v1/nodes/10.200.20.25?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 19:25:03.212499 kubelet[1853]: E0212 19:25:03.212466 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:04.213283 kubelet[1853]: E0212 19:25:04.213250 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:05.214190 kubelet[1853]: E0212 19:25:05.214155 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:06.215072 kubelet[1853]: E0212 19:25:06.215027 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:07.215786 kubelet[1853]: E0212 19:25:07.215744 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 19:25:08.216763 kubelet[1853]: E0212 19:25:08.216717 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:08.850594 kubelet[1853]: E0212 19:25:08.850558 1853 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.25?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 19:25:09.217643 kubelet[1853]: E0212 19:25:09.217601 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:10.128278 kubelet[1853]: E0212 19:25:10.128248 1853 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:10.173767 env[1376]: time="2024-02-12T19:25:10.173657328Z" level=info msg="StopPodSandbox for \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\"" Feb 12 19:25:10.174103 env[1376]: time="2024-02-12T19:25:10.173778804Z" level=info msg="TearDown network for sandbox \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\" successfully" Feb 12 19:25:10.174103 env[1376]: time="2024-02-12T19:25:10.173812643Z" level=info msg="StopPodSandbox for \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\" returns successfully" Feb 12 19:25:10.174536 env[1376]: time="2024-02-12T19:25:10.174512580Z" level=info msg="RemovePodSandbox for \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\"" Feb 12 19:25:10.174651 env[1376]: time="2024-02-12T19:25:10.174618857Z" level=info msg="Forcibly stopping sandbox \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\"" Feb 12 19:25:10.174773 env[1376]: time="2024-02-12T19:25:10.174732773Z" level=info msg="TearDown network for sandbox 
\"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\" successfully" Feb 12 19:25:10.187339 env[1376]: time="2024-02-12T19:25:10.187298405Z" level=info msg="RemovePodSandbox \"f667cace673f3fd4b933b6d83e99f6389383794b314e638cf04d2b4a332016d6\" returns successfully" Feb 12 19:25:10.218597 kubelet[1853]: E0212 19:25:10.218565 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:10.336134 update_engine[1367]: I0212 19:25:10.335817 1367 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 12 19:25:10.336134 update_engine[1367]: I0212 19:25:10.335851 1367 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 12 19:25:10.336134 update_engine[1367]: I0212 19:25:10.335961 1367 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 12 19:25:10.338263 update_engine[1367]: I0212 19:25:10.338035 1367 omaha_request_params.cc:62] Current group set to lts Feb 12 19:25:10.338263 update_engine[1367]: I0212 19:25:10.338125 1367 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 12 19:25:10.338263 update_engine[1367]: I0212 19:25:10.338130 1367 update_attempter.cc:643] Scheduling an action processor start. 
Feb 12 19:25:10.338263 update_engine[1367]: I0212 19:25:10.338145 1367 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 12 19:25:10.338263 update_engine[1367]: I0212 19:25:10.338164 1367 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 12 19:25:10.338520 locksmithd[1458]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 12 19:25:10.385377 update_engine[1367]: I0212 19:25:10.385277 1367 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 12 19:25:10.385377 update_engine[1367]: I0212 19:25:10.385305 1367 omaha_request_action.cc:271] Request: Feb 12 19:25:10.385377 update_engine[1367]: Feb 12 19:25:10.385377 update_engine[1367]: Feb 12 19:25:10.385377 update_engine[1367]: Feb 12 19:25:10.385377 update_engine[1367]: Feb 12 19:25:10.385377 update_engine[1367]: Feb 12 19:25:10.385377 update_engine[1367]: Feb 12 19:25:10.385377 update_engine[1367]: Feb 12 19:25:10.385377 update_engine[1367]: Feb 12 19:25:10.385377 update_engine[1367]: I0212 19:25:10.385310 1367 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:25:10.387027 update_engine[1367]: I0212 19:25:10.387004 1367 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:25:10.387213 update_engine[1367]: I0212 19:25:10.387197 1367 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 12 19:25:10.440650 update_engine[1367]: E0212 19:25:10.440620 1367 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:25:10.440782 update_engine[1367]: I0212 19:25:10.440714 1367 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 12 19:25:11.219757 kubelet[1853]: E0212 19:25:11.219558 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:12.220559 kubelet[1853]: E0212 19:25:12.220528 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:13.160731 kubelet[1853]: E0212 19:25:13.160700 1853 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.25\": Get \"https://10.200.20.4:6443/api/v1/nodes/10.200.20.25?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 19:25:13.221149 kubelet[1853]: E0212 19:25:13.221106 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:14.222046 kubelet[1853]: E0212 19:25:14.222012 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:15.222530 kubelet[1853]: E0212 19:25:15.222491 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:16.223468 kubelet[1853]: E0212 19:25:16.223436 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:17.224467 kubelet[1853]: E0212 19:25:17.224433 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:18.225081 kubelet[1853]: E0212 19:25:18.225047 1853 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:18.851102 kubelet[1853]: E0212 19:25:18.851065 1853 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.25?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 19:25:18.851310 kubelet[1853]: I0212 19:25:18.851299 1853 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 12 19:25:19.225477 kubelet[1853]: E0212 19:25:19.225450 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:20.226476 kubelet[1853]: E0212 19:25:20.226445 1853 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:20.315693 update_engine[1367]: I0212 19:25:20.315639 1367 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:25:20.316071 update_engine[1367]: I0212 19:25:20.315907 1367 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:25:20.316141 update_engine[1367]: I0212 19:25:20.316099 1367 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 12 19:25:20.398524 update_engine[1367]: E0212 19:25:20.398489 1367 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:25:20.398660 update_engine[1367]: I0212 19:25:20.398592 1367 libcurl_http_fetcher.cc:283] No HTTP response, retry 2